Jan 29 16:09:52 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 29 16:09:52 crc restorecon[4699]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:52 crc restorecon[4699]: 
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:52 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 
16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:09:53 crc 
restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 
16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 
16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc 
restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:09:53 crc restorecon[4699]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:09:53 crc restorecon[4699]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:09:53 crc restorecon[4699]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 29 16:09:53 crc kubenswrapper[4714]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:09:53 crc kubenswrapper[4714]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 29 16:09:53 crc kubenswrapper[4714]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:09:53 crc kubenswrapper[4714]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 16:09:53 crc kubenswrapper[4714]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:09:53 crc kubenswrapper[4714]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.941267 4714 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949558 4714 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949597 4714 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949607 4714 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949616 4714 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949625 4714 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949634 4714 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949642 4714 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949652 4714 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949661 4714 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949672 4714 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949680 4714 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949687 4714 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949696 4714 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949706 4714 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949717 4714 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949725 4714 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949734 4714 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949743 4714 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949751 4714 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949759 4714 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949768 4714 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949775 4714 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949783 4714 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949790 4714 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949798 4714 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949806 4714 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949814 4714 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949831 4714 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949839 4714 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949847 4714 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949855 4714 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949863 4714 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949871 4714 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949878 4714 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949886 4714 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949893 4714 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949901 4714 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949909 4714 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949917 4714 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949925 4714 feature_gate.go:330] unrecognized feature gate: 
DNSNameResolver Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949965 4714 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949974 4714 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949981 4714 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949989 4714 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.949996 4714 feature_gate.go:330] unrecognized feature gate: Example Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950004 4714 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950012 4714 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950019 4714 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950027 4714 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950034 4714 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950042 4714 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950050 4714 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950057 4714 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950069 4714 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950078 4714 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950087 4714 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950099 4714 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950108 4714 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950118 4714 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950130 4714 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
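
[Annotation] The long run of "unrecognized feature gate" warnings above (feature_gate.go:330) is consistent with OpenShift-only gate names (GatewayAPI, PinnedImages, NewOLM, ...) being fed through the upstream Kubernetes feature-gate parser, which only knows the upstream gates; in this build the unknown names are logged and skipped rather than treated as fatal, and only recognized gates reach the resolved map logged later. A self-contained sketch of that warn-and-skip pattern — the registry and gate names here are made up for illustration, not the real feature_gate.go code:

// featuregate_demo.go — illustrative sketch of the warn-and-skip pattern
// behind the "unrecognized feature gate" lines; not the kubelet source.
package main

import "log"

// known mimics the registry of gates the parser understands.
var known = map[string]bool{
	"CloudDualStackNodeIPs":                  true,
	"DisableKubeletCloudCredentialProviders": true,
	"KMSv1":                                  true,
}

// setFromMap applies the requested gate values, warning on names the
// registry does not know instead of failing startup.
func setFromMap(requested map[string]bool) map[string]bool {
	enabled := map[string]bool{}
	for name, value := range requested {
		if !known[name] {
			log.Printf("unrecognized feature gate: %s", name)
			continue
		}
		enabled[name] = value
	}
	return enabled
}

func main() {
	// GatewayAPI and PinnedImages stand in for OpenShift-only gates.
	result := setFromMap(map[string]bool{
		"CloudDualStackNodeIPs": true,
		"GatewayAPI":            true,
		"PinnedImages":          true,
	})
	log.Printf("feature gates: %v", result)
}
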
Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950142 4714 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950151 4714 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950160 4714 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950169 4714 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950177 4714 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950185 4714 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950193 4714 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950201 4714 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950209 4714 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950217 4714 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.950225 4714 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950360 4714 flags.go:64] FLAG: --address="0.0.0.0" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950377 4714 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950391 4714 flags.go:64] FLAG: --anonymous-auth="true" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950408 4714 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950421 4714 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950463 4714 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950475 4714 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950485 4714 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950495 4714 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950504 4714 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950513 4714 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950525 4714 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950535 4714 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950544 4714 flags.go:64] FLAG: --cgroup-root="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950552 4714 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950561 4714 flags.go:64] FLAG: --client-ca-file="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950570 4714 flags.go:64] 
FLAG: --cloud-config="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950579 4714 flags.go:64] FLAG: --cloud-provider="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950587 4714 flags.go:64] FLAG: --cluster-dns="[]" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950599 4714 flags.go:64] FLAG: --cluster-domain="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950608 4714 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950617 4714 flags.go:64] FLAG: --config-dir="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950626 4714 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950636 4714 flags.go:64] FLAG: --container-log-max-files="5" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950648 4714 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950657 4714 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950666 4714 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950677 4714 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950686 4714 flags.go:64] FLAG: --contention-profiling="false" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950695 4714 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950703 4714 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950713 4714 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950722 4714 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950733 4714 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950742 4714 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950751 4714 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950760 4714 flags.go:64] FLAG: --enable-load-reader="false" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950769 4714 flags.go:64] FLAG: --enable-server="true" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950778 4714 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950790 4714 flags.go:64] FLAG: --event-burst="100" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950800 4714 flags.go:64] FLAG: --event-qps="50" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950809 4714 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950818 4714 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950827 4714 flags.go:64] FLAG: --eviction-hard="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950837 4714 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950846 4714 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950855 4714 flags.go:64] FLAG: 
--eviction-pressure-transition-period="5m0s" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950865 4714 flags.go:64] FLAG: --eviction-soft="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950875 4714 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950883 4714 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950892 4714 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950901 4714 flags.go:64] FLAG: --experimental-mounter-path="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950910 4714 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950919 4714 flags.go:64] FLAG: --fail-swap-on="true" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950928 4714 flags.go:64] FLAG: --feature-gates="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950967 4714 flags.go:64] FLAG: --file-check-frequency="20s" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950976 4714 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950986 4714 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.950995 4714 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951005 4714 flags.go:64] FLAG: --healthz-port="10248" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951014 4714 flags.go:64] FLAG: --help="false" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951022 4714 flags.go:64] FLAG: --hostname-override="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951031 4714 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951040 4714 flags.go:64] FLAG: --http-check-frequency="20s" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951049 4714 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951058 4714 flags.go:64] FLAG: --image-credential-provider-config="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951066 4714 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951075 4714 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951084 4714 flags.go:64] FLAG: --image-service-endpoint="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951093 4714 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951102 4714 flags.go:64] FLAG: --kube-api-burst="100" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951110 4714 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951120 4714 flags.go:64] FLAG: --kube-api-qps="50" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951128 4714 flags.go:64] FLAG: --kube-reserved="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951137 4714 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951147 4714 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951156 4714 flags.go:64] FLAG: 
--kubelet-cgroups="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951165 4714 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951174 4714 flags.go:64] FLAG: --lock-file="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951182 4714 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951193 4714 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951202 4714 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951215 4714 flags.go:64] FLAG: --log-json-split-stream="false" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951227 4714 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951235 4714 flags.go:64] FLAG: --log-text-split-stream="false" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951245 4714 flags.go:64] FLAG: --logging-format="text" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951253 4714 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951263 4714 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951272 4714 flags.go:64] FLAG: --manifest-url="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951281 4714 flags.go:64] FLAG: --manifest-url-header="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951293 4714 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951303 4714 flags.go:64] FLAG: --max-open-files="1000000" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951314 4714 flags.go:64] FLAG: --max-pods="110" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951323 4714 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951332 4714 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951342 4714 flags.go:64] FLAG: --memory-manager-policy="None" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951350 4714 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951359 4714 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951368 4714 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951377 4714 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951399 4714 flags.go:64] FLAG: --node-status-max-images="50" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951409 4714 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951418 4714 flags.go:64] FLAG: --oom-score-adj="-999" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951427 4714 flags.go:64] FLAG: --pod-cidr="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951436 4714 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 29 16:09:53 crc 
kubenswrapper[4714]: I0129 16:09:53.951451 4714 flags.go:64] FLAG: --pod-manifest-path="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951460 4714 flags.go:64] FLAG: --pod-max-pids="-1" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951470 4714 flags.go:64] FLAG: --pods-per-core="0" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951479 4714 flags.go:64] FLAG: --port="10250" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951488 4714 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951498 4714 flags.go:64] FLAG: --provider-id="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951506 4714 flags.go:64] FLAG: --qos-reserved="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951515 4714 flags.go:64] FLAG: --read-only-port="10255" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951524 4714 flags.go:64] FLAG: --register-node="true" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951533 4714 flags.go:64] FLAG: --register-schedulable="true" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951542 4714 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951557 4714 flags.go:64] FLAG: --registry-burst="10" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951567 4714 flags.go:64] FLAG: --registry-qps="5" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951576 4714 flags.go:64] FLAG: --reserved-cpus="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951586 4714 flags.go:64] FLAG: --reserved-memory="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951596 4714 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951606 4714 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951614 4714 flags.go:64] FLAG: --rotate-certificates="false" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951623 4714 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951632 4714 flags.go:64] FLAG: --runonce="false" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951641 4714 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951651 4714 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951660 4714 flags.go:64] FLAG: --seccomp-default="false" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951669 4714 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951678 4714 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951687 4714 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951697 4714 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951706 4714 flags.go:64] FLAG: --storage-driver-password="root" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951715 4714 flags.go:64] FLAG: --storage-driver-secure="false" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951723 4714 flags.go:64] FLAG: --storage-driver-table="stats" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951732 4714 flags.go:64] FLAG: 
--storage-driver-user="root" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951741 4714 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951750 4714 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951759 4714 flags.go:64] FLAG: --system-cgroups="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951768 4714 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951783 4714 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951791 4714 flags.go:64] FLAG: --tls-cert-file="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951800 4714 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951811 4714 flags.go:64] FLAG: --tls-min-version="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951819 4714 flags.go:64] FLAG: --tls-private-key-file="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951828 4714 flags.go:64] FLAG: --topology-manager-policy="none" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951837 4714 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951846 4714 flags.go:64] FLAG: --topology-manager-scope="container" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951855 4714 flags.go:64] FLAG: --v="2" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951867 4714 flags.go:64] FLAG: --version="false" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951878 4714 flags.go:64] FLAG: --vmodule="" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951889 4714 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.951898 4714 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952138 4714 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952149 4714 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952159 4714 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952168 4714 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952176 4714 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952185 4714 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952195 4714 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952205 4714 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952214 4714 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952224 4714 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952234 4714 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952243 4714 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952252 4714 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952260 4714 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952268 4714 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952277 4714 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952286 4714 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952295 4714 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952330 4714 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952341 4714 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952350 4714 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952359 4714 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952367 4714 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952374 4714 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952382 4714 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952391 4714 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952399 4714 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952407 4714 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952415 4714 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952425 4714 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
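
[Annotation] The same warning set repeats several times in quick succession (at 53.949, 53.952, 53.968 and 53.969 in this log), apparently because the merged gate configuration is applied more than once during startup, each pass re-logging every unknown name. Counting occurrences per gate makes it easy to confirm the repeats are one list parsed repeatedly rather than new problems; a throwaway counter in the same vein as the sketch above, reading journal text from stdin:

// gatecount.go — counts "unrecognized feature gate" warnings per gate name,
// to confirm repeats are re-parses of one list rather than distinct issues.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

var gateRe = regexp.MustCompile(`unrecognized feature gate: (\w+)`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		for _, m := range gateRe.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]]++
		}
	}
	names := make([]string, 0, len(counts))
	for n := range counts {
		names = append(names, n)
	}
	sort.Strings(names)
	for _, n := range names {
		fmt.Printf("%4d %s\n", counts[n], n)
	}
}
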
Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952434 4714 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952444 4714 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952453 4714 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952461 4714 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952469 4714 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952478 4714 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952486 4714 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952497 4714 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952507 4714 feature_gate.go:330] unrecognized feature gate: Example Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952515 4714 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952523 4714 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952531 4714 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952539 4714 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952546 4714 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952554 4714 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952562 4714 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952569 4714 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952578 4714 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952586 4714 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952594 4714 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952602 4714 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952610 4714 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952617 4714 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952625 4714 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952633 4714 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952640 
4714 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952648 4714 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952656 4714 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952664 4714 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952672 4714 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952681 4714 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952689 4714 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952697 4714 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952705 4714 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952713 4714 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952720 4714 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952728 4714 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952736 4714 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952743 4714 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952751 4714 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.952759 4714 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.955674 4714 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.968277 4714 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.968330 4714 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968490 4714 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968510 4714 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968522 4714 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968533 4714 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 16:09:53 crc 
kubenswrapper[4714]: W0129 16:09:53.968544 4714 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968554 4714 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968564 4714 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968574 4714 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968585 4714 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968595 4714 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968605 4714 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968615 4714 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968626 4714 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968636 4714 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968646 4714 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968655 4714 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968666 4714 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968675 4714 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968686 4714 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968696 4714 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968706 4714 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968716 4714 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968726 4714 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968736 4714 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968747 4714 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968758 4714 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968767 4714 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968777 4714 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968786 4714 feature_gate.go:330] unrecognized feature gate: Example Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968797 4714 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 16:09:53 crc 
kubenswrapper[4714]: W0129 16:09:53.968807 4714 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968817 4714 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968827 4714 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968837 4714 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968850 4714 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968860 4714 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968871 4714 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968882 4714 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968892 4714 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968902 4714 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968912 4714 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968922 4714 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968966 4714 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968977 4714 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.968987 4714 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969001 4714 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969016 4714 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969027 4714 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969039 4714 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969051 4714 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969062 4714 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969072 4714 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969086 4714 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969100 4714 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969114 4714 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969129 4714 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969140 4714 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969150 4714 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969161 4714 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969171 4714 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969185 4714 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969199 4714 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969212 4714 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969224 4714 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969236 4714 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969247 4714 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969258 4714 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969268 4714 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969279 4714 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969290 4714 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969304 4714 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.969324 4714 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969628 4714 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969649 4714 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969661 4714 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969672 4714 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969682 4714 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 
16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969692 4714 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969702 4714 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969713 4714 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969724 4714 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969734 4714 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969748 4714 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969764 4714 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969775 4714 feature_gate.go:330] unrecognized feature gate: Example Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969786 4714 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969797 4714 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969807 4714 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969817 4714 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969827 4714 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969837 4714 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969848 4714 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969857 4714 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969868 4714 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969878 4714 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969888 4714 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969899 4714 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969909 4714 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969919 4714 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969962 4714 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969974 4714 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969985 4714 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.969996 4714 
feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970007 4714 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970017 4714 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970027 4714 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970039 4714 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970050 4714 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970064 4714 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970078 4714 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970089 4714 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970099 4714 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970109 4714 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970119 4714 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970130 4714 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970140 4714 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970149 4714 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970160 4714 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970170 4714 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970180 4714 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970190 4714 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970199 4714 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970210 4714 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970220 4714 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970230 4714 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970240 4714 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970250 4714 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970261 4714 feature_gate.go:330] unrecognized feature gate: 
HardwareSpeed Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970273 4714 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970283 4714 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970293 4714 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970303 4714 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970314 4714 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970325 4714 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970335 4714 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970348 4714 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970362 4714 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970373 4714 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970386 4714 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970399 4714 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970409 4714 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970419 4714 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 16:09:53 crc kubenswrapper[4714]: W0129 16:09:53.970431 4714 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.970447 4714 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.971570 4714 server.go:940] "Client rotation is on, will bootstrap in background" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.978720 4714 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.978915 4714 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
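
[Annotation] The certificate_manager entries that follow report the client certificate's expiration and the rotation deadline derived from it, using the pair loaded from /var/lib/kubelet/pki/kubelet-client-current.pem (per the certificate_store line above). The same check can be done by hand with nothing but the Go standard library; a small sketch, path taken from the log:

// certexpiry.go — prints the validity window of the first certificate in
// a PEM bundle such as /var/lib/kubelet/pki/kubelet-client-current.pem.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		log.Fatal(err)
	}
	// The file holds both key and certificate; walk the PEM blocks until
	// a CERTIFICATE block is found.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
		return
	}
	log.Fatal("no certificate block found")
}
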
Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.980970 4714 server.go:997] "Starting client certificate rotation" Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.981024 4714 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.982208 4714 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-22 23:29:39.189693331 +0000 UTC Jan 29 16:09:53 crc kubenswrapper[4714]: I0129 16:09:53.982358 4714 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.009868 4714 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 16:09:54 crc kubenswrapper[4714]: E0129 16:09:54.012657 4714 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.016546 4714 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.037807 4714 log.go:25] "Validated CRI v1 runtime API" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.079814 4714 log.go:25] "Validated CRI v1 image API" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.084400 4714 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.090222 4714 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-29-16-05-32-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.090258 4714 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.104584 4714 manager.go:217] Machine: {Timestamp:2026-01-29 16:09:54.102181766 +0000 UTC m=+0.622682896 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:1ab8f43b-7f84-4fd2-a80a-2aae14146bf4 BootID:856e4040-197b-4e74-9239-c0ebcf6976ae Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 
Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:44:c3:66 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:44:c3:66 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:5f:e7:32 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:ca:2c:4b Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:5b:93:26 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:e9:13:2b Speed:-1 Mtu:1496} {Name:eth10 MacAddress:4e:91:6c:ac:7a:42 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:d6:cd:d5:d7:8c:20 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] 
Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.104786 4714 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.104877 4714 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.105150 4714 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.105287 4714 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.105312 4714 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.105479 4714 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.105489 4714 
container_manager_linux.go:303] "Creating device plugin manager" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.105914 4714 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.105952 4714 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.106071 4714 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.106146 4714 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.110481 4714 kubelet.go:418] "Attempting to sync node with API server" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.110501 4714 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.110521 4714 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.110532 4714 kubelet.go:324] "Adding apiserver pod source" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.110542 4714 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.114579 4714 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.115852 4714 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.117671 4714 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:09:54 crc kubenswrapper[4714]: W0129 16:09:54.118583 4714 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 29 16:09:54 crc kubenswrapper[4714]: E0129 16:09:54.118643 4714 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:09:54 crc kubenswrapper[4714]: W0129 16:09:54.118718 4714 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 29 16:09:54 crc kubenswrapper[4714]: E0129 16:09:54.118813 4714 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.118971 4714 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 29 16:09:54 crc 
kubenswrapper[4714]: I0129 16:09:54.118994 4714 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.119001 4714 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.119018 4714 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.119029 4714 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.119036 4714 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.119066 4714 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.119078 4714 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.119087 4714 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.119095 4714 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.119114 4714 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.119121 4714 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.119732 4714 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.120218 4714 server.go:1280] "Started kubelet" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.120849 4714 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.121203 4714 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:09:54 crc systemd[1]: Started Kubernetes Kubelet. 
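
Two things in the span above are worth decoding. First, the certificate_manager.go:356 pair: the client certificate expires 2026-02-24, yet the rotation deadline is 2025-12-22, which is already in the past relative to the log clock (2026-01-29), so "Rotating certificates" fires immediately, and the CSR POST then fails with connection refused because the kubelet is racing the static-pod API server it is itself about to launch; the E-level "Unhandled Error" is retried with backoff, not fatal, and the same connection-refused pattern repeats below for the reflector lists, the node lease, and the event POST until the API server comes up. Second, the deadline itself is not the expiry: the manager picks a random point roughly 70-90% of the way through the certificate's validity window so a fleet of kubelets does not hit the CSR signer at once. A back-of-the-envelope sketch of that computation; the 0.7/0.2 constants match the upstream client-go certificate manager as of this writing, and the one-year notBefore is an assumption since it is not logged:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline approximates the client-go certificate manager: pick a
// uniformly random point in [70%, 90%) of the validity window.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// notAfter is taken from the log line above; notBefore is not logged,
	// so a one-year client certificate is assumed here.
	notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)
	notBefore := notAfter.AddDate(-1, 0, 0)
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	// The logged deadline, 2025-12-22 23:29:39, falls roughly 82% into that
	// assumed window, consistent with the [70%, 90%) band.
}
```

Under that assumption the numbers line up, which is why an elapsed deadline plus an unreachable signer produces exactly this sequence: rotate now, fail, keep serving on the still-valid current pem, retry later.
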
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.121880 4714 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.122531 4714 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.123880 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.123977 4714 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.124032 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 23:07:12.104438175 +0000 UTC Jan 29 16:09:54 crc kubenswrapper[4714]: E0129 16:09:54.124138 4714 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.124267 4714 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.124287 4714 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.124398 4714 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 16:09:54 crc kubenswrapper[4714]: E0129 16:09:54.124552 4714 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="200ms" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.124997 4714 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.125015 4714 factory.go:55] Registering systemd factory Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.125024 4714 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.125054 4714 server.go:460] "Adding debug handlers to kubelet server" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.125700 4714 factory.go:153] Registering CRI-O factory Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.125738 4714 factory.go:221] Registration of the crio container factory successfully Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.125772 4714 factory.go:103] Registering Raw factory Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.125794 4714 manager.go:1196] Started watching for new ooms in manager Jan 29 16:09:54 crc kubenswrapper[4714]: W0129 16:09:54.126057 4714 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 29 16:09:54 crc kubenswrapper[4714]: E0129 16:09:54.126179 4714 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.129087 4714 manager.go:319] Starting recovery of all containers Jan 29 16:09:54 crc kubenswrapper[4714]: E0129 16:09:54.137522 4714 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.46:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f3f7f327a95fb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 16:09:54.120185339 +0000 UTC m=+0.640686459,LastTimestamp:2026-01-29 16:09:54.120185339 +0000 UTC m=+0.640686459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.143364 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.143455 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.143488 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.143515 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.143543 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.143570 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.143595 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.143621 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.143650 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.143680 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.143708 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.143734 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.143759 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.143791 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.143817 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.143846 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.143873 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.143902 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.143928 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144085 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144108 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144130 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144151 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144173 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144193 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144213 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144238 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144293 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144372 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144399 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" 
volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144426 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144453 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144483 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144509 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144535 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144566 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144594 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144624 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144650 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144677 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144702 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144727 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144753 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144780 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144807 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144874 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144906 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.144968 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145004 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145032 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145059 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145087 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" 
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145126 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145160 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145189 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145218 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145245 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145274 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145302 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145328 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145357 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145384 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145409 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 
16:09:54.145436 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145467 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145506 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145537 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145563 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145589 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145616 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145642 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145667 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145692 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145719 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 
16:09:54.145747 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145773 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145799 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145826 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145853 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145879 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145922 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.145993 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.146020 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.146045 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.146074 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.146105 4714 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.146133 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.146158 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.147960 4714 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148275 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148294 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148309 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148323 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148336 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148353 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148369 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148384 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148413 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148428 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148448 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148461 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148476 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148490 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148501 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148517 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148541 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148558 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148575 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148590 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148603 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148617 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148630 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148647 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148663 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148677 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148692 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148706 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148717 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148734 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148748 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148764 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148778 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148791 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148805 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148822 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148837 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148850 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148864 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148878 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148889 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148952 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148969 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148980 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.148991 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149004 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149017 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149030 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149041 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149057 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149069 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149082 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149095 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149108 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149120 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149142 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149157 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149171 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149185 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149198 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149211 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149225 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149236 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149250 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149264 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149276 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149288 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149302 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149315 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149327 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149337 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149351 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149365 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149376 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149387 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149397 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149407 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149422 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149434 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149446 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149460 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149472 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149484 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149496 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149508 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149520 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149533 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149547 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149557 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149569 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149582 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149594 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149607 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149619 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149631 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149644 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149655 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149667 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149678 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149692 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149705 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149716 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149729 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149741 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149753 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149766 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149778 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149821 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149835 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149882 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149954 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149978 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.149996 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.150017 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.150034 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.150050 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.150066 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.150082 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.150097 4714 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.150111 4714 reconstruct.go:97] "Volume reconstruction finished"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.150122 4714 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.154332 4714 manager.go:324] Recovery completed
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.166195 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.170006 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.170050 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.170067 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.171678 4714 cpu_manager.go:225] "Starting CPU manager" policy="none"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.171701 4714 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.171723 4714 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.180743 4714 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.182858 4714 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.182896 4714 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.182923 4714 kubelet.go:2335] "Starting kubelet main sync loop"
Jan 29 16:09:54 crc kubenswrapper[4714]: E0129 16:09:54.182986 4714 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 16:09:54 crc kubenswrapper[4714]: W0129 16:09:54.187156 4714 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused
Jan 29 16:09:54 crc kubenswrapper[4714]: E0129 16:09:54.187252 4714 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.190253 4714 policy_none.go:49] "None policy: Start"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.190959 4714 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.190984 4714 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 16:09:54 crc kubenswrapper[4714]: E0129 16:09:54.224477 4714 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.252396 4714 manager.go:334] "Starting Device Plugin manager"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.252669 4714 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.252698 4714 server.go:79] "Starting device plugin registration server"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.253195 4714 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.253222 4714 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.253390 4714 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.253528 4714 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.253557 4714 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 16:09:54 crc kubenswrapper[4714]: E0129 16:09:54.262526 4714 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.283886 4714 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.284012 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.285301 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.285339 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.285349 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.285481 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.285675 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.285717 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.286243 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.286266 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.286274 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.286652 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.286693 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.286706 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.286850 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.286991 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.287030 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.287618 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.287665 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.287673 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.287738 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.287755 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.287849 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.287755 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.287885 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.287905 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.288424 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.288449 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.288457 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.288541 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.288616 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.288660 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.289144 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.289167 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.289175 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.289287 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.289307 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.289731 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.289752 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.289760 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.289796 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.289812 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.289822 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.289910 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.289926 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.289961 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:09:54 crc kubenswrapper[4714]: E0129 16:09:54.324988 4714 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="400ms"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.353520 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.353581 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.353630 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.353672 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.353731 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.353763 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.353780 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.353804 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.353922 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.353971 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.354009 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.354033 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.354056 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.354092 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.354118 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.354172 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.354436 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.354462 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.354469 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.354490 4714 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: E0129 16:09:54.354968 4714 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.454790 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.454872 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.454908 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.454987 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455006 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455059 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455087 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455092 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455085 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455019 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455175 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455204 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455232 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455260 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455290 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455333 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455378 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455397 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455424 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455378 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455446 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455461 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455475 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455477 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455475 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455484 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455420 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455623 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455534 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.455587 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.555872 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.557320 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.557368 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.557386 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.557417 4714 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: E0129 16:09:54.558012 4714 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.623774 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.647152 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.654347 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: W0129 16:09:54.663095 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-593d0f6b670c6966a3bbaadaefb40a1ad42f1fa8edf9796bf0b8d0301dc388ac WatchSource:0}: Error finding container 593d0f6b670c6966a3bbaadaefb40a1ad42f1fa8edf9796bf0b8d0301dc388ac: Status 404 returned error can't find the container with id 593d0f6b670c6966a3bbaadaefb40a1ad42f1fa8edf9796bf0b8d0301dc388ac
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.672979 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: W0129 16:09:54.674060 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-999f1dcb3adbb51cdabe6027fe9de8017a8a05762d46b433c8d9e7661a1bbc43 WatchSource:0}: Error finding container 999f1dcb3adbb51cdabe6027fe9de8017a8a05762d46b433c8d9e7661a1bbc43: Status 404 returned error can't find the container with id 999f1dcb3adbb51cdabe6027fe9de8017a8a05762d46b433c8d9e7661a1bbc43
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.679188 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 16:09:54 crc kubenswrapper[4714]: W0129 16:09:54.692003 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-68be8e1eb6567776239bcd622bc35f2239a081987843d4b6a00afa70ddd507c6 WatchSource:0}: Error finding container 68be8e1eb6567776239bcd622bc35f2239a081987843d4b6a00afa70ddd507c6: Status 404 returned error can't find the container with id 68be8e1eb6567776239bcd622bc35f2239a081987843d4b6a00afa70ddd507c6
Jan 29 16:09:54 crc kubenswrapper[4714]: W0129 16:09:54.701719 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-239eb8666b3fe82cedcc35d13f0a5d821d908c81ea42456e4d6f8a03ec8fd490 WatchSource:0}: Error finding container 239eb8666b3fe82cedcc35d13f0a5d821d908c81ea42456e4d6f8a03ec8fd490: Status 404 returned error can't find the container with id 239eb8666b3fe82cedcc35d13f0a5d821d908c81ea42456e4d6f8a03ec8fd490
Jan 29 16:09:54 crc kubenswrapper[4714]: E0129 16:09:54.725880 4714 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="800ms"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.959084 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.960291 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.960348 4714 kubelet_node_status.go:724]
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.960360 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:09:54 crc kubenswrapper[4714]: I0129 16:09:54.960394 4714 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:09:54 crc kubenswrapper[4714]: E0129 16:09:54.960837 4714 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc" Jan 29 16:09:54 crc kubenswrapper[4714]: W0129 16:09:54.964590 4714 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 29 16:09:54 crc kubenswrapper[4714]: E0129 16:09:54.964666 4714 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:09:55 crc kubenswrapper[4714]: W0129 16:09:55.047554 4714 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 29 16:09:55 crc kubenswrapper[4714]: E0129 16:09:55.047640 4714 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:09:55 crc kubenswrapper[4714]: I0129 16:09:55.121625 4714 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 29 16:09:55 crc kubenswrapper[4714]: I0129 16:09:55.124723 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 14:54:20.541537821 +0000 UTC Jan 29 16:09:55 crc kubenswrapper[4714]: I0129 16:09:55.186335 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"239eb8666b3fe82cedcc35d13f0a5d821d908c81ea42456e4d6f8a03ec8fd490"} Jan 29 16:09:55 crc kubenswrapper[4714]: I0129 16:09:55.186971 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"68be8e1eb6567776239bcd622bc35f2239a081987843d4b6a00afa70ddd507c6"} Jan 29 16:09:55 crc kubenswrapper[4714]: I0129 16:09:55.187718 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a1d20cbc534d7ed08405bfd39639adc94428bd1e01a3e749031288f444f59d07"} Jan 29 16:09:55 crc kubenswrapper[4714]: I0129 16:09:55.188375 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"999f1dcb3adbb51cdabe6027fe9de8017a8a05762d46b433c8d9e7661a1bbc43"} Jan 29 16:09:55 crc kubenswrapper[4714]: I0129 16:09:55.189148 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"593d0f6b670c6966a3bbaadaefb40a1ad42f1fa8edf9796bf0b8d0301dc388ac"} Jan 29 16:09:55 crc kubenswrapper[4714]: W0129 16:09:55.348117 4714 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 29 16:09:55 crc kubenswrapper[4714]: E0129 16:09:55.348211 4714 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:09:55 crc kubenswrapper[4714]: W0129 16:09:55.397269 4714 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 29 16:09:55 crc kubenswrapper[4714]: E0129 16:09:55.397349 4714 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:09:55 crc kubenswrapper[4714]: E0129 16:09:55.526713 4714 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="1.6s" Jan 29 16:09:55 crc kubenswrapper[4714]: I0129 16:09:55.761099 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:09:55 crc kubenswrapper[4714]: I0129 16:09:55.762140 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:09:55 crc kubenswrapper[4714]: I0129 16:09:55.762184 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:09:55 crc kubenswrapper[4714]: I0129 16:09:55.762193 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:09:55 crc kubenswrapper[4714]: I0129 16:09:55.762224 4714 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:09:55 crc kubenswrapper[4714]: E0129 16:09:55.762674 4714 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc" Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.075805 4714 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 16:09:56 crc kubenswrapper[4714]: E0129 16:09:56.076860 4714 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.122601 4714 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.124999 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 05:29:21.169865698 +0000 UTC Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.193523 4714 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c" exitCode=0 Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.193577 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c"} Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.193758 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.194946 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.194981 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.194993 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.196425 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.196955 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995"} Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.196985 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30"} Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.196993 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:09:56 crc 
kubenswrapper[4714]: I0129 16:09:56.196998 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2"} Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.197008 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835"} Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.197309 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.197337 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.197347 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.197996 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.198076 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.198097 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.199956 4714 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7" exitCode=0 Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.200006 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7"} Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.200149 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.201295 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.201328 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.201340 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.201367 4714 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="86530d623401c3cf5ebe44fcc1c2110a5f4dde059a5d27f74e118d5ca1df40dc" exitCode=0 Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.201427 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"86530d623401c3cf5ebe44fcc1c2110a5f4dde059a5d27f74e118d5ca1df40dc"} Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.201523 
4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.202410 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.202451 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.202466 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.203582 4714 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec" exitCode=0 Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.203627 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec"} Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.203659 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.204335 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.204359 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:09:56 crc kubenswrapper[4714]: I0129 16:09:56.204367 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:09:56 crc kubenswrapper[4714]: W0129 16:09:56.587847 4714 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 29 16:09:56 crc kubenswrapper[4714]: E0129 16:09:56.587974 4714 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:09:57 crc kubenswrapper[4714]: W0129 16:09:57.104463 4714 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 29 16:09:57 crc kubenswrapper[4714]: E0129 16:09:57.104546 4714 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.122088 4714 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.125402 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 18:08:51.746540599 +0000 UTC Jan 29 16:09:57 crc kubenswrapper[4714]: E0129 16:09:57.128156 4714 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="3.2s" Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.207047 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6565dc4c35c618a239bdc2be6dd45a9057e83573c92c3fc816350eb014c822c9"} Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.207085 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.207091 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6068173079b1d5cf7de92fe34bd0a4701b11a4dba7ae384f2d0e33f185656107"} Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.207192 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1f0bdc60ea5a5e188b2ed8894bb387084567b294c3a356fed01493d4bd6a7caf"} Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.208138 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.208212 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.208265 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.211538 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2"} Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.211629 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa"} Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.211732 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6"} Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.211791 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f"} Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.213087 4714 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c" exitCode=0 Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.213204 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c"} Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.213366 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.214023 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.214102 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.214165 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.215966 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.216371 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.216469 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"89cb2f8d441042c4c95e3cc056f991565c18bd93dcb0d61f3735e2451ff439a4"} Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.217136 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.217244 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.217283 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.217376 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.217404 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.217327 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.363250 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.364361 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.364389 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.364398 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:09:57 crc kubenswrapper[4714]: I0129 16:09:57.364419 4714 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:09:57 crc kubenswrapper[4714]: E0129 16:09:57.364811 4714 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc" Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.125632 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 01:08:43.355338261 +0000 UTC Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.222052 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8"} Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.222567 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.223974 4714 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334" exitCode=0 Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.224107 4714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.224162 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.224372 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.224537 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.224701 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.224948 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.224992 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.224857 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334"} Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.225911 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.226046 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.226057 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 
16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.226068 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.226078 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.226075 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.226177 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.226173 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.226192 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.512668 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.512852 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.514484 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.514523 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.514535 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:09:58 crc kubenswrapper[4714]: I0129 16:09:58.521418 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:09:59 crc kubenswrapper[4714]: I0129 16:09:59.126742 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 17:16:10.736502391 +0000 UTC Jan 29 16:09:59 crc kubenswrapper[4714]: I0129 16:09:59.233620 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:09:59 crc kubenswrapper[4714]: I0129 16:09:59.233595 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346"} Jan 29 16:09:59 crc kubenswrapper[4714]: I0129 16:09:59.233799 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:09:59 crc kubenswrapper[4714]: I0129 16:09:59.233804 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9"} Jan 29 16:09:59 crc kubenswrapper[4714]: I0129 16:09:59.233948 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:09:59 crc kubenswrapper[4714]: I0129 16:09:59.233974 4714 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02"} Jan 29 16:09:59 crc kubenswrapper[4714]: I0129 16:09:59.233992 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109"} Jan 29 16:09:59 crc kubenswrapper[4714]: I0129 16:09:59.235178 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:09:59 crc kubenswrapper[4714]: I0129 16:09:59.235209 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:09:59 crc kubenswrapper[4714]: I0129 16:09:59.235248 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:09:59 crc kubenswrapper[4714]: I0129 16:09:59.235266 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:09:59 crc kubenswrapper[4714]: I0129 16:09:59.235224 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:09:59 crc kubenswrapper[4714]: I0129 16:09:59.235331 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:00 crc kubenswrapper[4714]: I0129 16:10:00.127255 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 23:25:20.672932191 +0000 UTC Jan 29 16:10:00 crc kubenswrapper[4714]: I0129 16:10:00.177509 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:10:00 crc kubenswrapper[4714]: I0129 16:10:00.180683 4714 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 16:10:00 crc kubenswrapper[4714]: I0129 16:10:00.240173 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1c9bf119056f20eb63b1c55993b4e5e4fbce0ef1e1a0fc20c49047eb9c2af1b4"} Jan 29 16:10:00 crc kubenswrapper[4714]: I0129 16:10:00.240246 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:10:00 crc kubenswrapper[4714]: I0129 16:10:00.240278 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:10:00 crc kubenswrapper[4714]: I0129 16:10:00.240248 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:10:00 crc kubenswrapper[4714]: I0129 16:10:00.241337 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:00 crc kubenswrapper[4714]: I0129 16:10:00.241359 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:00 crc kubenswrapper[4714]: I0129 16:10:00.241369 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:00 crc kubenswrapper[4714]: I0129 16:10:00.241501 4714 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:00 crc kubenswrapper[4714]: I0129 16:10:00.241536 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:00 crc kubenswrapper[4714]: I0129 16:10:00.241550 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:00 crc kubenswrapper[4714]: I0129 16:10:00.241705 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:00 crc kubenswrapper[4714]: I0129 16:10:00.241811 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:00 crc kubenswrapper[4714]: I0129 16:10:00.241904 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:00 crc kubenswrapper[4714]: I0129 16:10:00.565608 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:10:00 crc kubenswrapper[4714]: I0129 16:10:00.567075 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:00 crc kubenswrapper[4714]: I0129 16:10:00.567167 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:00 crc kubenswrapper[4714]: I0129 16:10:00.567231 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:00 crc kubenswrapper[4714]: I0129 16:10:00.567310 4714 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:10:01 crc kubenswrapper[4714]: I0129 16:10:01.127538 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:30:41.544701161 +0000 UTC Jan 29 16:10:01 crc kubenswrapper[4714]: I0129 16:10:01.243335 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:10:01 crc kubenswrapper[4714]: I0129 16:10:01.244335 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:01 crc kubenswrapper[4714]: I0129 16:10:01.244465 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:01 crc kubenswrapper[4714]: I0129 16:10:01.244580 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:01 crc kubenswrapper[4714]: I0129 16:10:01.393670 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:10:01 crc kubenswrapper[4714]: I0129 16:10:01.394337 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:10:01 crc kubenswrapper[4714]: I0129 16:10:01.396279 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:01 crc kubenswrapper[4714]: I0129 16:10:01.396333 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:01 crc kubenswrapper[4714]: I0129 16:10:01.396355 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 29 16:10:01 crc kubenswrapper[4714]: I0129 16:10:01.450512 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:10:01 crc kubenswrapper[4714]: I0129 16:10:01.450734 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:10:01 crc kubenswrapper[4714]: I0129 16:10:01.452402 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:01 crc kubenswrapper[4714]: I0129 16:10:01.452443 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:01 crc kubenswrapper[4714]: I0129 16:10:01.452456 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:01 crc kubenswrapper[4714]: I0129 16:10:01.886056 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:10:01 crc kubenswrapper[4714]: I0129 16:10:01.886343 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:10:01 crc kubenswrapper[4714]: I0129 16:10:01.887726 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:01 crc kubenswrapper[4714]: I0129 16:10:01.887767 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:01 crc kubenswrapper[4714]: I0129 16:10:01.887776 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:01 crc kubenswrapper[4714]: I0129 16:10:01.995987 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:10:02 crc kubenswrapper[4714]: I0129 16:10:02.128199 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 18:14:30.978735463 +0000 UTC Jan 29 16:10:02 crc kubenswrapper[4714]: I0129 16:10:02.246409 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:10:02 crc kubenswrapper[4714]: I0129 16:10:02.248307 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:02 crc kubenswrapper[4714]: I0129 16:10:02.248358 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:02 crc kubenswrapper[4714]: I0129 16:10:02.248374 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:02 crc kubenswrapper[4714]: I0129 16:10:02.373874 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 29 16:10:02 crc kubenswrapper[4714]: I0129 16:10:02.374138 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:10:02 crc kubenswrapper[4714]: I0129 16:10:02.375841 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:02 crc kubenswrapper[4714]: I0129 16:10:02.375913 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 29 16:10:02 crc kubenswrapper[4714]: I0129 16:10:02.375973 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:03 crc kubenswrapper[4714]: I0129 16:10:03.128524 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 16:58:38.912384011 +0000 UTC Jan 29 16:10:04 crc kubenswrapper[4714]: I0129 16:10:04.129264 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 21:25:48.803572412 +0000 UTC Jan 29 16:10:04 crc kubenswrapper[4714]: E0129 16:10:04.262659 4714 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 16:10:04 crc kubenswrapper[4714]: I0129 16:10:04.735399 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:10:04 crc kubenswrapper[4714]: I0129 16:10:04.735558 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:10:04 crc kubenswrapper[4714]: I0129 16:10:04.736701 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:04 crc kubenswrapper[4714]: I0129 16:10:04.736741 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:04 crc kubenswrapper[4714]: I0129 16:10:04.736759 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:05 crc kubenswrapper[4714]: I0129 16:10:05.129629 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:23:47.839923433 +0000 UTC Jan 29 16:10:06 crc kubenswrapper[4714]: I0129 16:10:06.129998 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 00:25:23.73045319 +0000 UTC Jan 29 16:10:06 crc kubenswrapper[4714]: I0129 16:10:06.441560 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 29 16:10:06 crc kubenswrapper[4714]: I0129 16:10:06.441714 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:10:06 crc kubenswrapper[4714]: I0129 16:10:06.442975 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:06 crc kubenswrapper[4714]: I0129 16:10:06.443000 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:06 crc kubenswrapper[4714]: I0129 16:10:06.443008 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:07 crc kubenswrapper[4714]: I0129 16:10:07.130440 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 04:39:08.67083275 +0000 UTC Jan 29 16:10:07 crc kubenswrapper[4714]: I0129 16:10:07.736235 4714 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe 
status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 16:10:07 crc kubenswrapper[4714]: I0129 16:10:07.736339 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 16:10:08 crc kubenswrapper[4714]: I0129 16:10:08.122417 4714 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 29 16:10:08 crc kubenswrapper[4714]: I0129 16:10:08.130902 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 19:57:14.757433164 +0000 UTC Jan 29 16:10:08 crc kubenswrapper[4714]: W0129 16:10:08.174275 4714 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 29 16:10:08 crc kubenswrapper[4714]: I0129 16:10:08.174384 4714 trace.go:236] Trace[1706721841]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 16:09:58.172) (total time: 10001ms): Jan 29 16:10:08 crc kubenswrapper[4714]: Trace[1706721841]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:10:08.174) Jan 29 16:10:08 crc kubenswrapper[4714]: Trace[1706721841]: [10.001636696s] [10.001636696s] END Jan 29 16:10:08 crc kubenswrapper[4714]: E0129 16:10:08.174431 4714 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 29 16:10:08 crc kubenswrapper[4714]: I0129 16:10:08.240368 4714 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46538->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 29 16:10:08 crc kubenswrapper[4714]: I0129 16:10:08.240464 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46538->192.168.126.11:17697: read: connection reset by peer" Jan 29 16:10:08 crc kubenswrapper[4714]: W0129 16:10:08.508717 4714 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 29 16:10:08 
crc kubenswrapper[4714]: I0129 16:10:08.508851 4714 trace.go:236] Trace[1900129430]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 16:09:58.507) (total time: 10001ms): Jan 29 16:10:08 crc kubenswrapper[4714]: Trace[1900129430]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:10:08.508) Jan 29 16:10:08 crc kubenswrapper[4714]: Trace[1900129430]: [10.001248415s] [10.001248415s] END Jan 29 16:10:08 crc kubenswrapper[4714]: E0129 16:10:08.508903 4714 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 29 16:10:09 crc kubenswrapper[4714]: I0129 16:10:09.131391 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 02:11:31.172349292 +0000 UTC Jan 29 16:10:09 crc kubenswrapper[4714]: I0129 16:10:09.267394 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 16:10:09 crc kubenswrapper[4714]: I0129 16:10:09.270604 4714 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8" exitCode=255 Jan 29 16:10:09 crc kubenswrapper[4714]: I0129 16:10:09.270667 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8"} Jan 29 16:10:09 crc kubenswrapper[4714]: I0129 16:10:09.270915 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:10:09 crc kubenswrapper[4714]: I0129 16:10:09.272259 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:09 crc kubenswrapper[4714]: I0129 16:10:09.272336 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:09 crc kubenswrapper[4714]: I0129 16:10:09.272356 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:09 crc kubenswrapper[4714]: I0129 16:10:09.273402 4714 scope.go:117] "RemoveContainer" containerID="6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8" Jan 29 16:10:09 crc kubenswrapper[4714]: I0129 16:10:09.373467 4714 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 29 16:10:09 crc kubenswrapper[4714]: I0129 16:10:09.373540 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 29 16:10:09 crc kubenswrapper[4714]: I0129 16:10:09.379763 4714 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 29 16:10:09 crc kubenswrapper[4714]: I0129 16:10:09.379828 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 29 16:10:10 crc kubenswrapper[4714]: I0129 16:10:10.131858 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 02:09:58.935019265 +0000 UTC Jan 29 16:10:10 crc kubenswrapper[4714]: I0129 16:10:10.184500 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:10:10 crc kubenswrapper[4714]: I0129 16:10:10.184706 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:10:10 crc kubenswrapper[4714]: I0129 16:10:10.186214 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:10 crc kubenswrapper[4714]: I0129 16:10:10.186276 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:10 crc kubenswrapper[4714]: I0129 16:10:10.186301 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:10 crc kubenswrapper[4714]: I0129 16:10:10.279334 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 16:10:10 crc kubenswrapper[4714]: I0129 16:10:10.282468 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79"} Jan 29 16:10:10 crc kubenswrapper[4714]: I0129 16:10:10.282708 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:10:10 crc kubenswrapper[4714]: I0129 16:10:10.283873 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:10 crc kubenswrapper[4714]: I0129 16:10:10.283949 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:10 crc kubenswrapper[4714]: I0129 16:10:10.283975 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:11 crc kubenswrapper[4714]: I0129 16:10:11.132694 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 11:45:14.889820284 +0000 UTC Jan 29 16:10:12 crc kubenswrapper[4714]: I0129 16:10:12.003538 
4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:10:12 crc kubenswrapper[4714]: I0129 16:10:12.003817 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:10:12 crc kubenswrapper[4714]: I0129 16:10:12.003988 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:10:12 crc kubenswrapper[4714]: I0129 16:10:12.005519 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:12 crc kubenswrapper[4714]: I0129 16:10:12.005578 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:12 crc kubenswrapper[4714]: I0129 16:10:12.005598 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:12 crc kubenswrapper[4714]: I0129 16:10:12.013150 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:10:12 crc kubenswrapper[4714]: I0129 16:10:12.133240 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 06:55:36.331551379 +0000 UTC Jan 29 16:10:12 crc kubenswrapper[4714]: I0129 16:10:12.288204 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:10:12 crc kubenswrapper[4714]: I0129 16:10:12.289084 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:12 crc kubenswrapper[4714]: I0129 16:10:12.289127 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:12 crc kubenswrapper[4714]: I0129 16:10:12.289145 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:12 crc kubenswrapper[4714]: I0129 16:10:12.581157 4714 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.121895 4714 apiserver.go:52] "Watching apiserver" Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.133738 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 22:55:27.025353911 +0000 UTC Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.139409 4714 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.139901 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.140505 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.140690 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:13 crc kubenswrapper[4714]: E0129 16:10:13.140870 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.141037 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.141048 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.141244 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:13 crc kubenswrapper[4714]: E0129 16:10:13.141289 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.141681 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:13 crc kubenswrapper[4714]: E0129 16:10:13.141770 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.142978 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.144208 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.144422 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.144457 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.145049 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.145594 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.145838 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.145877 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.146308 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.173663 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.194594 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.207695 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.221275 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.225827 4714 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.232647 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.244586 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.261950 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:13 crc kubenswrapper[4714]: I0129 16:10:13.308185 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.003334 4714 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.134974 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 03:57:01.551416911 +0000 UTC Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.204258 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.216913 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.227996 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.239116 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.249755 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.266066 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.277096 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:14 crc kubenswrapper[4714]: E0129 16:10:14.373810 4714 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 29 16:10:14 crc kubenswrapper[4714]: E0129 16:10:14.376463 4714 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.376733 4714 trace.go:236] Trace[2076281891]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 16:10:01.075) (total time: 13301ms): Jan 29 16:10:14 crc kubenswrapper[4714]: Trace[2076281891]: ---"Objects listed" error: 13301ms (16:10:14.376) Jan 29 16:10:14 crc kubenswrapper[4714]: Trace[2076281891]: [13.301400826s] [13.301400826s] END Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.377171 4714 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.378599 4714 trace.go:236] Trace[298477415]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 16:10:01.047) (total time: 13331ms): Jan 29 16:10:14 crc kubenswrapper[4714]: Trace[298477415]: ---"Objects listed" error: 13331ms (16:10:14.378) Jan 29 16:10:14 crc kubenswrapper[4714]: Trace[298477415]: [13.3310769s] [13.3310769s] END Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.378900 4714 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.378658 4714 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.387959 4714 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.410702 4714 csr.go:261] certificate signing request csr-2pk7r is approved, waiting to be issued Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.420149 4714 csr.go:257] certificate signing request csr-2pk7r is issued Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.479734 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 16:10:14 
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.480268 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.480470 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.481263 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.482794 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.483445 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.484398 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.485025 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.485485 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.485590 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.486098 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.486209 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.486311 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.486401 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.480738 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.486494 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.486486 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.480746 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.481177 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.486601 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.482707 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.483355 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.486638 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.486670 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.486695 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.486721 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.486745 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.484320 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.484971 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.485418 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.486155 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.486876 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.486884 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.486907 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.487401 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.487499 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.487542 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.487614 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.487110 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.487129 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.487146 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.487173 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.487295 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.487438 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.487439 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.487466 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.487744 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.487796 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.487813 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.487839 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.487824 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.487863 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.487952 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.487979 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488001 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488035 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488066 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488087 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488097 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488131 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488160 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488165 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488191 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488226 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488258 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488288 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488306 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488319 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488328 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488351 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488410 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488471 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488508 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488542 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488566 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488576 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488614 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488650 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488683 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488716 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488752 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488782 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488814 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488861 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488618 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488896 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488954 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488994 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.489030 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.489066 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.489096 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488889 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488660 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488652 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.489156 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.489167 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488794 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.489189 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.489205 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488832 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488877 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488962 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.488990 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.489096 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.489355 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.489407 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.489387 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.489423 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.489432 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.489598 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.489618 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.489714 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.489748 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.489134 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.489774 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.489808 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.489839 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.489863 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.489886 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.489909 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.490093 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.490701 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.490800 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.490843 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.490882 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.490928 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.490998 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.491039 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.491080 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.491117 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.491169 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.491205 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.491242 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.491248 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.491281 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.491319 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.491361 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.491400 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.491439 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.491476 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.491514 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.491548 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.491586 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.491749 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.491787 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.491826 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.491860 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.491870 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.491910 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.491976 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.492025 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.492061 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.492096 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.492132 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.492170 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.492206 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.492246 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.492288 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.492324 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.492358 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.492425 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.492461 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.492495 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.492529 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.492622 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.492657 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.492690 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.492723 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.492787 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.492827 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.492861 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.492901 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.492960 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.493005 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.493042 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.493079 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.493115 4714 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.493153 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.493188 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.493269 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.493307 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.493344 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.493391 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.493447 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.493487 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.493526 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.493562 4714 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.493600 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.493643 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.493695 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.493739 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.493784 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.493824 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.493863 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.493900 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.493965 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 16:10:14 
crc kubenswrapper[4714]: I0129 16:10:14.494003 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494040 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494077 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494112 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494147 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494183 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494220 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494276 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494311 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494347 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 16:10:14 crc 
kubenswrapper[4714]: I0129 16:10:14.494385 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494421 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494465 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494502 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494539 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494577 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494617 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494656 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494694 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494732 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494773 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494814 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494853 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494889 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494927 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494986 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495031 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495067 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495102 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495139 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495172 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495208 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495243 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495278 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495313 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495349 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495384 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495422 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495458 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495493 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495535 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495576 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495613 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495651 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495687 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495727 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495764 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495800 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495836 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 16:10:14 crc 
kubenswrapper[4714]: I0129 16:10:14.495878 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495918 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495987 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.496032 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.496069 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.496103 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.496204 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.496273 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.496367 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.496423 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" 
(UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.496473 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.496530 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.496570 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.496615 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.496659 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.496697 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.496749 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.496787 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.496833 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.496877 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501105 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501168 4714 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501206 4714 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501237 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501271 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501301 4714 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501323 4714 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501348 4714 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501377 4714 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501407 4714 reconciler_common.go:293] "Volume 
detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501447 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501472 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501493 4714 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501513 4714 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501534 4714 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501561 4714 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501590 4714 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501617 4714 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501645 4714 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501673 4714 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501693 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501713 4714 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501736 4714 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501755 4714 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501794 4714 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501819 4714 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501838 4714 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501859 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501881 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501903 4714 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501924 4714 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501974 4714 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501996 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.502017 4714 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.502036 4714 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.502061 4714 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.502091 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.502122 4714 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.502150 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.502174 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.502197 4714 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.502227 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.502256 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.502281 4714 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.502304 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.502328 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.502359 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.502384 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.502404 4714 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.502423 4714 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.502444 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.502464 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.502483 4714 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.502503 4714 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.502522 4714 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.502542 4714 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.492529 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.493633 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494129 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494408 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.494862 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.495216 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501107 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.501427 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.503922 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.505341 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.506349 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.507004 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.507568 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.507754 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.507828 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.508056 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.508246 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.508320 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.508434 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.508606 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.508876 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: E0129 16:10:14.508908 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:10:15.008886106 +0000 UTC m=+21.529387226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.510809 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.510921 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.514377 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.517337 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.517353 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.517620 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.517622 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.517973 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.518201 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.518192 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.518289 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.518477 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.518631 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.518866 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.519229 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.519325 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.519553 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.519699 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.520285 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.520612 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.520917 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.520924 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.520961 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.511175 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.526372 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.526555 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.526726 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.528331 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.530189 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.530563 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.531590 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.531907 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.532342 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.532920 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.533125 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.533560 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.533666 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.533846 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.533871 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.533903 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.533960 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.534197 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.534285 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.534401 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.534428 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.534780 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.535276 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.535413 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.535782 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.538606 4714 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.542034 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.544309 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.544570 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.544799 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.544820 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.534518 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.547865 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.548056 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.548625 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.553560 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.554505 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.555052 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.555537 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.555531 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.555773 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.555787 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.556152 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.556217 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.556282 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.556341 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.556792 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.557872 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: E0129 16:10:14.559004 4714 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:10:14 crc kubenswrapper[4714]: E0129 16:10:14.559027 4714 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:10:14 crc kubenswrapper[4714]: E0129 16:10:14.559078 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:15.059058743 +0000 UTC m=+21.579559863 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:10:14 crc kubenswrapper[4714]: E0129 16:10:14.559162 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:15.059129505 +0000 UTC m=+21.579630635 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.560602 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:10:14 crc kubenswrapper[4714]: E0129 16:10:14.563109 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:10:14 crc kubenswrapper[4714]: E0129 16:10:14.563139 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:10:14 crc kubenswrapper[4714]: E0129 16:10:14.563160 4714 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:14 crc kubenswrapper[4714]: E0129 16:10:14.563225 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-29 16:10:15.063204084 +0000 UTC m=+21.583705204 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.564840 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.565557 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.565851 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.568906 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.569113 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.569250 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.569503 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.569520 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.570387 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.570437 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.570606 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.570606 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.570683 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.570451 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.570792 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.570800 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.571071 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.570957 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.574345 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.580225 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:10:14 crc kubenswrapper[4714]: E0129 16:10:14.580588 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:10:14 crc kubenswrapper[4714]: E0129 16:10:14.580608 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:10:14 crc kubenswrapper[4714]: E0129 16:10:14.580623 4714 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:14 crc kubenswrapper[4714]: E0129 16:10:14.580681 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:15.080658904 +0000 UTC m=+21.601160024 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.590160 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.596290 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.596745 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.601214 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.601628 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.601744 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.603952 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.603982 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604051 4714 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604068 4714 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604077 4714 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604086 4714 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604096 4714 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node 
\"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604104 4714 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604112 4714 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604122 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604132 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604140 4714 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604149 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604158 4714 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604166 4714 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604174 4714 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604182 4714 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604191 4714 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604200 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604208 4714 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc 
kubenswrapper[4714]: I0129 16:10:14.604216 4714 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604224 4714 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604233 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604241 4714 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604219 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604249 4714 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604287 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604319 4714 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604338 4714 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604358 4714 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604371 4714 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604383 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604376 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604396 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604458 4714 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604475 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.604280 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.605762 4714 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.605794 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.605807 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.605818 4714 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.605828 4714 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.605839 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.605850 4714 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.605862 4714 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.605873 4714 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.605882 4714 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.605893 4714 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.605903 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.605914 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.605923 4714 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.605949 4714 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.605959 4714 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.605968 4714 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.605977 4714 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.605987 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.605997 4714 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606005 4714 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606015 4714 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606031 4714 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606041 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606049 4714 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606058 4714 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606069 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606080 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606090 4714 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606099 4714 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606109 4714 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606119 4714 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606131 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606141 4714 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606151 4714 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606161 4714 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606171 4714 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606182 4714 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606191 4714 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606199 4714 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606208 4714 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606218 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606226 4714 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606235 4714 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606245 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606253 4714 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606262 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606279 4714 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606287 4714 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606296 4714 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606304 4714 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606313 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606323 4714 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606333 4714 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606341 4714 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606351 4714 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606361 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606370 4714 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606378 4714 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606387 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606397 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606405 4714 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606414 4714 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606424 4714 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606443 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606453 4714 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606462 4714 reconciler_common.go:293] "Volume detached for 
volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606470 4714 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606479 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606488 4714 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606497 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606507 4714 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606516 4714 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606525 4714 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606533 4714 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606542 4714 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606551 4714 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606559 4714 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606568 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606577 4714 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606587 4714 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606597 4714 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606606 4714 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.606615 4714 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.611664 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.611978 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.612174 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.612226 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.612266 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.612591 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.612633 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.612840 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.613231 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.614396 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.614433 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.614944 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.615227 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.615736 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.615762 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.619390 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.621181 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.630449 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.633058 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.633061 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.633353 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.633426 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.647729 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.656074 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.656289 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.667149 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.677479 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.677984 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.709023 4714 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.709139 4714 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.709472 4714 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.709506 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.709518 4714 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.709530 4714 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.709540 4714 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.709549 4714 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.709560 4714 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.709570 4714 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.709581 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.709590 4714 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.709598 4714 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.709608 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.709626 4714 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.709635 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.709644 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.709652 4714 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.709660 4714 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.709669 4714 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.709677 4714 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.709686 4714 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.709696 4714 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.744789 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.751097 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.764534 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.781183 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.798390 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.808953 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.811865 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.821188 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.832805 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.840827 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.850537 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.861865 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.876393 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.886631 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.908231 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.918868 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.928917 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:14 crc kubenswrapper[4714]: I0129 16:10:14.939775 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.012200 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:10:15 crc kubenswrapper[4714]: E0129 16:10:15.012418 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:10:16.012395443 +0000 UTC m=+22.532896553 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.113023 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.113083 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.113120 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.113149 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" 
(UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:15 crc kubenswrapper[4714]: E0129 16:10:15.113281 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:10:15 crc kubenswrapper[4714]: E0129 16:10:15.113282 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:10:15 crc kubenswrapper[4714]: E0129 16:10:15.113287 4714 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:10:15 crc kubenswrapper[4714]: E0129 16:10:15.113346 4714 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:10:15 crc kubenswrapper[4714]: E0129 16:10:15.113299 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:10:15 crc kubenswrapper[4714]: E0129 16:10:15.113447 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:16.113419686 +0000 UTC m=+22.633920806 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:10:15 crc kubenswrapper[4714]: E0129 16:10:15.113459 4714 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:15 crc kubenswrapper[4714]: E0129 16:10:15.113472 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:16.113463027 +0000 UTC m=+22.633964147 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:10:15 crc kubenswrapper[4714]: E0129 16:10:15.113524 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-29 16:10:16.113484628 +0000 UTC m=+22.633985768 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:15 crc kubenswrapper[4714]: E0129 16:10:15.113323 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:10:15 crc kubenswrapper[4714]: E0129 16:10:15.113547 4714 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:15 crc kubenswrapper[4714]: E0129 16:10:15.113579 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:16.11357028 +0000 UTC m=+22.634071420 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.135477 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 13:46:38.508414533 +0000 UTC Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.184045 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.184107 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:15 crc kubenswrapper[4714]: E0129 16:10:15.184267 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.184406 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:15 crc kubenswrapper[4714]: E0129 16:10:15.184412 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:10:15 crc kubenswrapper[4714]: E0129 16:10:15.184740 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.296801 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d"} Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.297251 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315"} Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.297323 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a3e46ca0032e7c46b49567152df4cab88b64467e2571d89c9955362b3c5a5f43"} Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.298031 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b"} Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.298097 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"86da9d03ae94b77bfb3371663564750484db9e0378cce10cfc3e10ab16a69bc0"} Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.298837 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a974d693556b28ba1f5a38ab8b3e14c0e5472255f8c6b3e4d29612bdb5151887"} Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.325029 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.343628 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.384453 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.413274 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.421965 4714 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-29 16:05:14 +0000 UTC, rotation deadline is 2026-11-17 06:34:21.676761944 +0000 UTC Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.422030 4714 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6998h24m6.254735008s for next certificate rotation Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.447589 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.467501 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.479858 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.490612 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.505108 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.519355 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.534834 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.549859 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.564393 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.581958 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.598878 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.612150 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.774341 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-46dqc"] Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.774749 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-46dqc" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.778361 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-ppngk"] Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.778655 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.779049 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.779128 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.779331 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.782407 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.782596 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.783981 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.784652 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.786468 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.799120 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.814341 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.830973 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.846503 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.864842 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.877546 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.895878 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.917567 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.920017 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f846b283-5468-4014-ba05-da5bfffa2ebc-hosts-file\") pod \"node-resolver-46dqc\" (UID: \"f846b283-5468-4014-ba05-da5bfffa2ebc\") " pod="openshift-dns/node-resolver-46dqc" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.920151 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8c765f3-89eb-4077-8829-03e86eb0c90c-mcd-auth-proxy-config\") pod \"machine-config-daemon-ppngk\" (UID: \"c8c765f3-89eb-4077-8829-03e86eb0c90c\") " pod="openshift-machine-config-operator/machine-config-daemon-ppngk" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.920203 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8c765f3-89eb-4077-8829-03e86eb0c90c-proxy-tls\") pod \"machine-config-daemon-ppngk\" (UID: \"c8c765f3-89eb-4077-8829-03e86eb0c90c\") " pod="openshift-machine-config-operator/machine-config-daemon-ppngk" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.920238 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsqf5\" (UniqueName: \"kubernetes.io/projected/c8c765f3-89eb-4077-8829-03e86eb0c90c-kube-api-access-bsqf5\") pod \"machine-config-daemon-ppngk\" (UID: \"c8c765f3-89eb-4077-8829-03e86eb0c90c\") " pod="openshift-machine-config-operator/machine-config-daemon-ppngk" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.920270 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbd9b\" (UniqueName: \"kubernetes.io/projected/f846b283-5468-4014-ba05-da5bfffa2ebc-kube-api-access-gbd9b\") pod \"node-resolver-46dqc\" (UID: \"f846b283-5468-4014-ba05-da5bfffa2ebc\") " pod="openshift-dns/node-resolver-46dqc" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.920300 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c8c765f3-89eb-4077-8829-03e86eb0c90c-rootfs\") pod \"machine-config-daemon-ppngk\" (UID: \"c8c765f3-89eb-4077-8829-03e86eb0c90c\") " pod="openshift-machine-config-operator/machine-config-daemon-ppngk" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.932100 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.949490 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.965118 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.979058 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:15 crc kubenswrapper[4714]: I0129 16:10:15.994781 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.012038 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.021267 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:10:16 crc kubenswrapper[4714]: E0129 16:10:16.021534 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:10:18.021494368 +0000 UTC m=+24.541995528 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.021814 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8c765f3-89eb-4077-8829-03e86eb0c90c-proxy-tls\") pod \"machine-config-daemon-ppngk\" (UID: \"c8c765f3-89eb-4077-8829-03e86eb0c90c\") " pod="openshift-machine-config-operator/machine-config-daemon-ppngk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.022071 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsqf5\" (UniqueName: \"kubernetes.io/projected/c8c765f3-89eb-4077-8829-03e86eb0c90c-kube-api-access-bsqf5\") pod \"machine-config-daemon-ppngk\" (UID: \"c8c765f3-89eb-4077-8829-03e86eb0c90c\") " pod="openshift-machine-config-operator/machine-config-daemon-ppngk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.022302 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbd9b\" (UniqueName: \"kubernetes.io/projected/f846b283-5468-4014-ba05-da5bfffa2ebc-kube-api-access-gbd9b\") pod \"node-resolver-46dqc\" (UID: \"f846b283-5468-4014-ba05-da5bfffa2ebc\") " pod="openshift-dns/node-resolver-46dqc" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.022475 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" 
(UniqueName: \"kubernetes.io/host-path/c8c765f3-89eb-4077-8829-03e86eb0c90c-rootfs\") pod \"machine-config-daemon-ppngk\" (UID: \"c8c765f3-89eb-4077-8829-03e86eb0c90c\") " pod="openshift-machine-config-operator/machine-config-daemon-ppngk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.022630 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f846b283-5468-4014-ba05-da5bfffa2ebc-hosts-file\") pod \"node-resolver-46dqc\" (UID: \"f846b283-5468-4014-ba05-da5bfffa2ebc\") " pod="openshift-dns/node-resolver-46dqc" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.022735 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f846b283-5468-4014-ba05-da5bfffa2ebc-hosts-file\") pod \"node-resolver-46dqc\" (UID: \"f846b283-5468-4014-ba05-da5bfffa2ebc\") " pod="openshift-dns/node-resolver-46dqc" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.022544 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c8c765f3-89eb-4077-8829-03e86eb0c90c-rootfs\") pod \"machine-config-daemon-ppngk\" (UID: \"c8c765f3-89eb-4077-8829-03e86eb0c90c\") " pod="openshift-machine-config-operator/machine-config-daemon-ppngk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.023243 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8c765f3-89eb-4077-8829-03e86eb0c90c-mcd-auth-proxy-config\") pod \"machine-config-daemon-ppngk\" (UID: \"c8c765f3-89eb-4077-8829-03e86eb0c90c\") " pod="openshift-machine-config-operator/machine-config-daemon-ppngk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.024096 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8c765f3-89eb-4077-8829-03e86eb0c90c-mcd-auth-proxy-config\") pod \"machine-config-daemon-ppngk\" (UID: \"c8c765f3-89eb-4077-8829-03e86eb0c90c\") " pod="openshift-machine-config-operator/machine-config-daemon-ppngk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.031663 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.050245 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.063809 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.066951 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbd9b\" (UniqueName: \"kubernetes.io/projected/f846b283-5468-4014-ba05-da5bfffa2ebc-kube-api-access-gbd9b\") pod \"node-resolver-46dqc\" (UID: \"f846b283-5468-4014-ba05-da5bfffa2ebc\") " pod="openshift-dns/node-resolver-46dqc" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.066902 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8c765f3-89eb-4077-8829-03e86eb0c90c-proxy-tls\") pod \"machine-config-daemon-ppngk\" (UID: \"c8c765f3-89eb-4077-8829-03e86eb0c90c\") " pod="openshift-machine-config-operator/machine-config-daemon-ppngk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.068401 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bsqf5\" (UniqueName: \"kubernetes.io/projected/c8c765f3-89eb-4077-8829-03e86eb0c90c-kube-api-access-bsqf5\") pod \"machine-config-daemon-ppngk\" (UID: \"c8c765f3-89eb-4077-8829-03e86eb0c90c\") " pod="openshift-machine-config-operator/machine-config-daemon-ppngk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.073553 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.087765 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.091748 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-46dqc" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.100160 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" Jan 29 16:10:16 crc kubenswrapper[4714]: W0129 16:10:16.101858 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf846b283_5468_4014_ba05_da5bfffa2ebc.slice/crio-d040290de4b9cff5da54cae43fa182eb438f5fb90c018e6aad54296b802f951c WatchSource:0}: Error finding container d040290de4b9cff5da54cae43fa182eb438f5fb90c018e6aad54296b802f951c: Status 404 returned error can't find the container with id d040290de4b9cff5da54cae43fa182eb438f5fb90c018e6aad54296b802f951c Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.125831 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:16 crc kubenswrapper[4714]: E0129 16:10:16.126073 4714 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:10:16 crc kubenswrapper[4714]: E0129 16:10:16.126167 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:18.126144867 +0000 UTC m=+24.646645987 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.126180 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.126310 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:16 crc kubenswrapper[4714]: E0129 16:10:16.126412 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.126428 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:16 crc kubenswrapper[4714]: E0129 16:10:16.126441 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:10:16 crc kubenswrapper[4714]: E0129 16:10:16.126518 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:10:16 crc kubenswrapper[4714]: E0129 16:10:16.126540 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:10:16 crc kubenswrapper[4714]: E0129 16:10:16.126541 4714 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:16 crc kubenswrapper[4714]: E0129 16:10:16.126554 4714 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:16 crc kubenswrapper[4714]: E0129 16:10:16.126606 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:18.12659419 +0000 UTC m=+24.647095310 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:16 crc kubenswrapper[4714]: E0129 16:10:16.126672 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:18.126640681 +0000 UTC m=+24.647141841 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:16 crc kubenswrapper[4714]: E0129 16:10:16.126678 4714 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:10:16 crc kubenswrapper[4714]: E0129 16:10:16.126757 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:18.126743134 +0000 UTC m=+24.647244294 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.135684 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 04:32:59.299196716 +0000 UTC Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.168096 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-2cfxk"] Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.169640 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.173024 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-b2ttm"] Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.173642 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.174824 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.175439 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.175880 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.176139 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.176327 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.176760 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.177375 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.194984 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771
aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.195959 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.196604 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.199145 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.202586 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.203697 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.205437 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.206198 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.207752 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.208543 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.209229 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.210497 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.211678 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.213918 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.215395 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.216332 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.217002 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.218154 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.220334 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.221323 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.222004 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.223503 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.224095 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.224771 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.225798 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.226599 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.227794 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.229076 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.231134 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.231349 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.231887 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.233961 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.234756 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.236092 4714 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.236733 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.239193 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.239928 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.240991 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.242572 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.243441 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.244090 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.244383 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.245444 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.246581 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.247123 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.248261 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.249041 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.250132 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.250673 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.251770 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.252423 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.254498 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.255516 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.255684 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.256406 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.258071 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.259185 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.260688 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.261373 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.267967 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.281553 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.296391 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.302290 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" event={"ID":"c8c765f3-89eb-4077-8829-03e86eb0c90c","Type":"ContainerStarted","Data":"27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5"} Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.302329 4714 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" event={"ID":"c8c765f3-89eb-4077-8829-03e86eb0c90c","Type":"ContainerStarted","Data":"4b5e0d6b66078d9a0f57472436f75eb3078eba951da5242e7a34f9bb0dab7f27"} Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.302859 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-46dqc" event={"ID":"f846b283-5468-4014-ba05-da5bfffa2ebc","Type":"ContainerStarted","Data":"d040290de4b9cff5da54cae43fa182eb438f5fb90c018e6aad54296b802f951c"} Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.310469 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.325727 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.331360 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8b20fd8d-1ebb-47d0-8676-403b99dac1ec-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2cfxk\" (UID: \"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\") " pod="openshift-multus/multus-additional-cni-plugins-2cfxk" Jan 29 16:10:16 crc 
kubenswrapper[4714]: I0129 16:10:16.331466 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-multus-socket-dir-parent\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.331517 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-cnibin\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.331553 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-hostroot\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.331977 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8b20fd8d-1ebb-47d0-8676-403b99dac1ec-cni-binary-copy\") pod \"multus-additional-cni-plugins-2cfxk\" (UID: \"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\") " pod="openshift-multus/multus-additional-cni-plugins-2cfxk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.332136 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8b20fd8d-1ebb-47d0-8676-403b99dac1ec-cnibin\") pod \"multus-additional-cni-plugins-2cfxk\" (UID: \"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\") " pod="openshift-multus/multus-additional-cni-plugins-2cfxk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.332267 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-host-run-netns\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.332421 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/89560008-8bdc-4640-af11-681d825e69d4-cni-binary-copy\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.332466 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-system-cni-dir\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.332521 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-multus-cni-dir\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 
16:10:16.332589 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/89560008-8bdc-4640-af11-681d825e69d4-multus-daemon-config\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.332638 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkc7b\" (UniqueName: \"kubernetes.io/projected/8b20fd8d-1ebb-47d0-8676-403b99dac1ec-kube-api-access-vkc7b\") pod \"multus-additional-cni-plugins-2cfxk\" (UID: \"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\") " pod="openshift-multus/multus-additional-cni-plugins-2cfxk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.334438 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-os-release\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.334514 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b20fd8d-1ebb-47d0-8676-403b99dac1ec-system-cni-dir\") pod \"multus-additional-cni-plugins-2cfxk\" (UID: \"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\") " pod="openshift-multus/multus-additional-cni-plugins-2cfxk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.334543 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8b20fd8d-1ebb-47d0-8676-403b99dac1ec-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2cfxk\" (UID: \"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\") " pod="openshift-multus/multus-additional-cni-plugins-2cfxk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.334569 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-multus-conf-dir\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.334596 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8b20fd8d-1ebb-47d0-8676-403b99dac1ec-os-release\") pod \"multus-additional-cni-plugins-2cfxk\" (UID: \"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\") " pod="openshift-multus/multus-additional-cni-plugins-2cfxk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.334622 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp6mh\" (UniqueName: \"kubernetes.io/projected/89560008-8bdc-4640-af11-681d825e69d4-kube-api-access-dp6mh\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.334647 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-host-run-k8s-cni-cncf-io\") pod \"multus-b2ttm\" (UID: 
\"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.334672 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-host-var-lib-kubelet\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.334694 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-etc-kubernetes\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.334720 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-host-var-lib-cni-bin\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.334741 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-host-var-lib-cni-multus\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.334763 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-host-run-multus-certs\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.339674 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.353332 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.370319 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.389035 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.405866 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.420625 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.435329 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.435454 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8b20fd8d-1ebb-47d0-8676-403b99dac1ec-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2cfxk\" (UID: \"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\") " pod="openshift-multus/multus-additional-cni-plugins-2cfxk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.435494 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-multus-socket-dir-parent\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 
16:10:16.435522 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-cnibin\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.435541 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-hostroot\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.435564 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8b20fd8d-1ebb-47d0-8676-403b99dac1ec-cni-binary-copy\") pod \"multus-additional-cni-plugins-2cfxk\" (UID: \"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\") " pod="openshift-multus/multus-additional-cni-plugins-2cfxk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.435582 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8b20fd8d-1ebb-47d0-8676-403b99dac1ec-cnibin\") pod \"multus-additional-cni-plugins-2cfxk\" (UID: \"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\") " pod="openshift-multus/multus-additional-cni-plugins-2cfxk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.435640 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-hostroot\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.435651 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-cnibin\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.435734 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-multus-socket-dir-parent\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.435763 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8b20fd8d-1ebb-47d0-8676-403b99dac1ec-cnibin\") pod \"multus-additional-cni-plugins-2cfxk\" (UID: \"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\") " pod="openshift-multus/multus-additional-cni-plugins-2cfxk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.436235 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8b20fd8d-1ebb-47d0-8676-403b99dac1ec-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2cfxk\" (UID: \"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\") " pod="openshift-multus/multus-additional-cni-plugins-2cfxk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.436260 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/8b20fd8d-1ebb-47d0-8676-403b99dac1ec-cni-binary-copy\") pod \"multus-additional-cni-plugins-2cfxk\" (UID: \"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\") " pod="openshift-multus/multus-additional-cni-plugins-2cfxk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.436320 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-host-run-netns\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.436339 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/89560008-8bdc-4640-af11-681d825e69d4-cni-binary-copy\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.436394 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-host-run-netns\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.436356 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkc7b\" (UniqueName: \"kubernetes.io/projected/8b20fd8d-1ebb-47d0-8676-403b99dac1ec-kube-api-access-vkc7b\") pod \"multus-additional-cni-plugins-2cfxk\" (UID: \"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\") " pod="openshift-multus/multus-additional-cni-plugins-2cfxk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.436447 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-system-cni-dir\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.436716 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-system-cni-dir\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.436751 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-multus-cni-dir\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.436768 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/89560008-8bdc-4640-af11-681d825e69d4-multus-daemon-config\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.436831 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/89560008-8bdc-4640-af11-681d825e69d4-cni-binary-copy\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " 
pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.436904 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-multus-cni-dir\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.437090 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-os-release\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.436784 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-os-release\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.437139 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b20fd8d-1ebb-47d0-8676-403b99dac1ec-system-cni-dir\") pod \"multus-additional-cni-plugins-2cfxk\" (UID: \"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\") " pod="openshift-multus/multus-additional-cni-plugins-2cfxk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.437159 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8b20fd8d-1ebb-47d0-8676-403b99dac1ec-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2cfxk\" (UID: \"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\") " pod="openshift-multus/multus-additional-cni-plugins-2cfxk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.437230 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b20fd8d-1ebb-47d0-8676-403b99dac1ec-system-cni-dir\") pod \"multus-additional-cni-plugins-2cfxk\" (UID: \"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\") " pod="openshift-multus/multus-additional-cni-plugins-2cfxk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.437236 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/89560008-8bdc-4640-af11-681d825e69d4-multus-daemon-config\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.437175 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-multus-conf-dir\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.437291 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8b20fd8d-1ebb-47d0-8676-403b99dac1ec-os-release\") pod \"multus-additional-cni-plugins-2cfxk\" (UID: \"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\") " pod="openshift-multus/multus-additional-cni-plugins-2cfxk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.437307 4714 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-host-run-k8s-cni-cncf-io\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.437354 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-multus-conf-dir\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.437397 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8b20fd8d-1ebb-47d0-8676-403b99dac1ec-os-release\") pod \"multus-additional-cni-plugins-2cfxk\" (UID: \"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\") " pod="openshift-multus/multus-additional-cni-plugins-2cfxk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.437429 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-host-var-lib-kubelet\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.437446 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp6mh\" (UniqueName: \"kubernetes.io/projected/89560008-8bdc-4640-af11-681d825e69d4-kube-api-access-dp6mh\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.437492 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-host-run-k8s-cni-cncf-io\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.437522 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-host-var-lib-kubelet\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.437546 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-host-var-lib-cni-bin\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.437563 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-host-var-lib-cni-multus\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.437614 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-host-var-lib-cni-bin\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.437676 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-host-var-lib-cni-multus\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.437579 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-host-run-multus-certs\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.437797 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-host-run-multus-certs\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.437802 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-etc-kubernetes\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.437809 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8b20fd8d-1ebb-47d0-8676-403b99dac1ec-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2cfxk\" (UID: \"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\") " pod="openshift-multus/multus-additional-cni-plugins-2cfxk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.437847 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/89560008-8bdc-4640-af11-681d825e69d4-etc-kubernetes\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.447656 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.454881 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp6mh\" (UniqueName: \"kubernetes.io/projected/89560008-8bdc-4640-af11-681d825e69d4-kube-api-access-dp6mh\") pod \"multus-b2ttm\" (UID: \"89560008-8bdc-4640-af11-681d825e69d4\") " pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.456604 4714 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vkc7b\" (UniqueName: \"kubernetes.io/projected/8b20fd8d-1ebb-47d0-8676-403b99dac1ec-kube-api-access-vkc7b\") pod \"multus-additional-cni-plugins-2cfxk\" (UID: \"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\") " pod="openshift-multus/multus-additional-cni-plugins-2cfxk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.462005 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.476351 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.479498 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.489806 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.494342 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.501321 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.503613 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.514504 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.523120 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.533886 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: W0129 16:10:16.535188 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b20fd8d_1ebb_47d0_8676_403b99dac1ec.slice/crio-62a298b4ef62dc42bfcda4ebb4fbb3d14f082ff2b065650707ddc28f29e2676c WatchSource:0}: Error finding container 62a298b4ef62dc42bfcda4ebb4fbb3d14f082ff2b065650707ddc28f29e2676c: Status 404 returned error can't find the container with id 62a298b4ef62dc42bfcda4ebb4fbb3d14f082ff2b065650707ddc28f29e2676c Jan 29 16:10:16 
crc kubenswrapper[4714]: I0129 16:10:16.539228 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-b2ttm" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.547237 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.549961 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sbnkt"] Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.550721 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: W0129 16:10:16.554721 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89560008_8bdc_4640_af11_681d825e69d4.slice/crio-f7e9b13edf785c5d027ffdc99f2efd71174884823eda661e75fcd22c34590d7a WatchSource:0}: Error finding container f7e9b13edf785c5d027ffdc99f2efd71174884823eda661e75fcd22c34590d7a: Status 404 returned error can't find the container with id f7e9b13edf785c5d027ffdc99f2efd71174884823eda661e75fcd22c34590d7a Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.554854 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.555007 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.555771 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.556038 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.556378 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.556370 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.560152 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.564878 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.583603 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.609280 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.627748 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.640543 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-kubelet\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.640605 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-run-netns\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.640635 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.640661 4714 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/04b20f02-6c1e-4082-8233-8f06bda63195-env-overrides\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.640683 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-cni-netd\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.640719 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-run-systemd\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.640743 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-run-ovn\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.640764 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-node-log\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.640781 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/04b20f02-6c1e-4082-8233-8f06bda63195-ovnkube-config\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.640804 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/04b20f02-6c1e-4082-8233-8f06bda63195-ovn-node-metrics-cert\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.640847 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-systemd-units\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.640867 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-cni-bin\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 
16:10:16.640896 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-etc-openvswitch\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.640917 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-log-socket\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.640953 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/04b20f02-6c1e-4082-8233-8f06bda63195-ovnkube-script-lib\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.640970 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vrsm\" (UniqueName: \"kubernetes.io/projected/04b20f02-6c1e-4082-8233-8f06bda63195-kube-api-access-7vrsm\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.640989 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-var-lib-openvswitch\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.641012 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-slash\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.641040 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-run-ovn-kubernetes\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.641058 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-run-openvswitch\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.647837 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.681738 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.697002 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.711874 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.724918 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.735009 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.741772 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/04b20f02-6c1e-4082-8233-8f06bda63195-env-overrides\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.741820 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-cni-netd\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.741859 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-run-systemd\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.741885 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-run-ovn\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.741911 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/04b20f02-6c1e-4082-8233-8f06bda63195-ovn-node-metrics-cert\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.741983 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-node-log\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.741983 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-cni-netd\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.742006 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/04b20f02-6c1e-4082-8233-8f06bda63195-ovnkube-config\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.742026 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-run-systemd\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.742067 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-systemd-units\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.742061 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-run-ovn\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.742105 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-systemd-units\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.742116 4714 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-node-log\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.742183 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-cni-bin\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.742216 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-etc-openvswitch\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.742234 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-log-socket\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.742252 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/04b20f02-6c1e-4082-8233-8f06bda63195-ovnkube-script-lib\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.742268 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vrsm\" (UniqueName: \"kubernetes.io/projected/04b20f02-6c1e-4082-8233-8f06bda63195-kube-api-access-7vrsm\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.742290 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-var-lib-openvswitch\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.742323 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-slash\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.742340 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-run-ovn-kubernetes\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.742361 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-run-openvswitch\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.742377 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-kubelet\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.742394 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-run-netns\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.742418 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.742474 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.742500 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-cni-bin\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.742521 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-etc-openvswitch\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.742543 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-log-socket\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.743351 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-run-ovn-kubernetes\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.743411 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/04b20f02-6c1e-4082-8233-8f06bda63195-ovnkube-config\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.743491 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-slash\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.743501 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-kubelet\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.743517 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/04b20f02-6c1e-4082-8233-8f06bda63195-ovnkube-script-lib\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.743533 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-run-openvswitch\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.743511 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-var-lib-openvswitch\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.743554 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-run-netns\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.743627 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/04b20f02-6c1e-4082-8233-8f06bda63195-env-overrides\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.748557 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/04b20f02-6c1e-4082-8233-8f06bda63195-ovn-node-metrics-cert\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.758870 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.780988 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vrsm\" (UniqueName: \"kubernetes.io/projected/04b20f02-6c1e-4082-8233-8f06bda63195-kube-api-access-7vrsm\") pod \"ovnkube-node-sbnkt\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.819075 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.856756 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.874347 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:16 crc kubenswrapper[4714]: W0129 16:10:16.889140 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04b20f02_6c1e_4082_8233_8f06bda63195.slice/crio-17aad70fcdfcfc2aa07f37d1c4b0d894a800d6ca4c4b34e6100a73fad699fe31 WatchSource:0}: Error finding container 17aad70fcdfcfc2aa07f37d1c4b0d894a800d6ca4c4b34e6100a73fad699fe31: Status 404 returned error can't find the container with id 17aad70fcdfcfc2aa07f37d1c4b0d894a800d6ca4c4b34e6100a73fad699fe31 Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.909023 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.942720 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:16 crc kubenswrapper[4714]: I0129 16:10:16.978701 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.016900 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.060514 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.096299 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.136787 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 23:14:33.032377679 +0000 UTC Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.137830 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.178126 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.183250 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.183300 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.183310 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:17 crc kubenswrapper[4714]: E0129 16:10:17.183416 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:10:17 crc kubenswrapper[4714]: E0129 16:10:17.183572 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:10:17 crc kubenswrapper[4714]: E0129 16:10:17.183697 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.228753 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8
b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.260669 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.300811 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.309077 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-46dqc" event={"ID":"f846b283-5468-4014-ba05-da5bfffa2ebc","Type":"ContainerStarted","Data":"4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c"} Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.311332 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" event={"ID":"c8c765f3-89eb-4077-8829-03e86eb0c90c","Type":"ContainerStarted","Data":"bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54"} Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.312747 4714 generic.go:334] "Generic (PLEG): container finished" podID="04b20f02-6c1e-4082-8233-8f06bda63195" containerID="6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba" exitCode=0 Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.312836 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerDied","Data":"6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba"} Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.312895 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerStarted","Data":"17aad70fcdfcfc2aa07f37d1c4b0d894a800d6ca4c4b34e6100a73fad699fe31"} Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.314761 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b2ttm" event={"ID":"89560008-8bdc-4640-af11-681d825e69d4","Type":"ContainerStarted","Data":"a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a"} Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.314812 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b2ttm" event={"ID":"89560008-8bdc-4640-af11-681d825e69d4","Type":"ContainerStarted","Data":"f7e9b13edf785c5d027ffdc99f2efd71174884823eda661e75fcd22c34590d7a"} Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.317824 4714 generic.go:334] "Generic (PLEG): container finished" podID="8b20fd8d-1ebb-47d0-8676-403b99dac1ec" containerID="e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76" exitCode=0 Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.318816 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" event={"ID":"8b20fd8d-1ebb-47d0-8676-403b99dac1ec","Type":"ContainerDied","Data":"e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76"} Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.318870 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" event={"ID":"8b20fd8d-1ebb-47d0-8676-403b99dac1ec","Type":"ContainerStarted","Data":"62a298b4ef62dc42bfcda4ebb4fbb3d14f082ff2b065650707ddc28f29e2676c"} Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.339762 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.377570 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.420406 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.456837 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:17 crc 
kubenswrapper[4714]: I0129 16:10:17.496070 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.537972 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.577893 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.616862 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.664467 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.694204 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.738020 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.775807 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.820157 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.860331 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:17 crc kubenswrapper[4714]: I0129 16:10:17.909572 4714 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.059995 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:10:18 crc kubenswrapper[4714]: E0129 16:10:18.060278 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:10:22.060252947 +0000 UTC m=+28.580754067 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.081392 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-c9jhc"] Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.081804 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-c9jhc" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.083430 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.083746 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.083997 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.084298 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.095986 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.113514 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.126483 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.137351 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 04:44:44.427951026 +0000 UTC Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.139894 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.160613 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.160660 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.160680 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.160705 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:18 crc kubenswrapper[4714]: E0129 16:10:18.160783 4714 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:10:18 crc kubenswrapper[4714]: E0129 16:10:18.160848 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:10:18 crc kubenswrapper[4714]: E0129 16:10:18.160866 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:10:18 crc kubenswrapper[4714]: E0129 16:10:18.160877 4714 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:18 crc kubenswrapper[4714]: E0129 16:10:18.160891 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:22.160863108 +0000 UTC m=+28.681364218 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:10:18 crc kubenswrapper[4714]: E0129 16:10:18.160941 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:22.160910799 +0000 UTC m=+28.681411919 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:18 crc kubenswrapper[4714]: E0129 16:10:18.160993 4714 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:10:18 crc kubenswrapper[4714]: E0129 16:10:18.161017 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:22.161011062 +0000 UTC m=+28.681512182 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:10:18 crc kubenswrapper[4714]: E0129 16:10:18.161049 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:10:18 crc kubenswrapper[4714]: E0129 16:10:18.161075 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:10:18 crc kubenswrapper[4714]: E0129 16:10:18.161094 4714 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:18 crc kubenswrapper[4714]: E0129 16:10:18.161146 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:22.161134056 +0000 UTC m=+28.681635176 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.186614 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.223414 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.256251 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.261628 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th2m6\" (UniqueName: \"kubernetes.io/projected/f80aba4c-9372-4bea-b537-cbd9b0a3e972-kube-api-access-th2m6\") pod \"node-ca-c9jhc\" (UID: \"f80aba4c-9372-4bea-b537-cbd9b0a3e972\") " pod="openshift-image-registry/node-ca-c9jhc" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.261700 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f80aba4c-9372-4bea-b537-cbd9b0a3e972-serviceca\") pod \"node-ca-c9jhc\" (UID: \"f80aba4c-9372-4bea-b537-cbd9b0a3e972\") " pod="openshift-image-registry/node-ca-c9jhc" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.261750 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f80aba4c-9372-4bea-b537-cbd9b0a3e972-host\") pod \"node-ca-c9jhc\" (UID: \"f80aba4c-9372-4bea-b537-cbd9b0a3e972\") " pod="openshift-image-registry/node-ca-c9jhc" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.295838 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.324496 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9"} Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.329013 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerStarted","Data":"e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4"} Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.329077 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerStarted","Data":"429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816"} Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.329093 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerStarted","Data":"b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc"} Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.329108 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerStarted","Data":"2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd"} Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.329119 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerStarted","Data":"7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016"} Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.331285 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" event={"ID":"8b20fd8d-1ebb-47d0-8676-403b99dac1ec","Type":"ContainerStarted","Data":"b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a"} Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.338691 4714 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.362325 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f80aba4c-9372-4bea-b537-cbd9b0a3e972-serviceca\") pod \"node-ca-c9jhc\" (UID: \"f80aba4c-9372-4bea-b537-cbd9b0a3e972\") " pod="openshift-image-registry/node-ca-c9jhc" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.362404 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f80aba4c-9372-4bea-b537-cbd9b0a3e972-host\") pod \"node-ca-c9jhc\" (UID: \"f80aba4c-9372-4bea-b537-cbd9b0a3e972\") " pod="openshift-image-registry/node-ca-c9jhc" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.362436 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th2m6\" (UniqueName: \"kubernetes.io/projected/f80aba4c-9372-4bea-b537-cbd9b0a3e972-kube-api-access-th2m6\") pod \"node-ca-c9jhc\" (UID: \"f80aba4c-9372-4bea-b537-cbd9b0a3e972\") " pod="openshift-image-registry/node-ca-c9jhc" Jan 29 16:10:18 crc 
kubenswrapper[4714]: I0129 16:10:18.362588 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f80aba4c-9372-4bea-b537-cbd9b0a3e972-host\") pod \"node-ca-c9jhc\" (UID: \"f80aba4c-9372-4bea-b537-cbd9b0a3e972\") " pod="openshift-image-registry/node-ca-c9jhc" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.363761 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f80aba4c-9372-4bea-b537-cbd9b0a3e972-serviceca\") pod \"node-ca-c9jhc\" (UID: \"f80aba4c-9372-4bea-b537-cbd9b0a3e972\") " pod="openshift-image-registry/node-ca-c9jhc" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.379746 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.408878 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th2m6\" (UniqueName: \"kubernetes.io/projected/f80aba4c-9372-4bea-b537-cbd9b0a3e972-kube-api-access-th2m6\") pod \"node-ca-c9jhc\" (UID: \"f80aba4c-9372-4bea-b537-cbd9b0a3e972\") " pod="openshift-image-registry/node-ca-c9jhc" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.418419 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-c9jhc" Jan 29 16:10:18 crc kubenswrapper[4714]: W0129 16:10:18.434692 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf80aba4c_9372_4bea_b537_cbd9b0a3e972.slice/crio-e6137eb162566b71c4dcc93123d898651a75be1290e37dc810157e1ff49321df WatchSource:0}: Error finding container e6137eb162566b71c4dcc93123d898651a75be1290e37dc810157e1ff49321df: Status 404 returned error can't find the container with id e6137eb162566b71c4dcc93123d898651a75be1290e37dc810157e1ff49321df Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.443159 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z 
is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.479059 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.517589 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.561766 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc 
kubenswrapper[4714]: I0129 16:10:18.595572 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.653075 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef
1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.683177 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.724249 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.758042 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.799772 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.838616 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.890605 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z 
is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.923378 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.955690 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\
\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\
\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:18 crc kubenswrapper[4714]: I0129 16:10:18.998880 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.037780 4714 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.083911 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.122910 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.137998 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 06:01:02.093902708 +0000 UTC Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.166078 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.183186 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.183239 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.183300 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:19 crc kubenswrapper[4714]: E0129 16:10:19.183323 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:10:19 crc kubenswrapper[4714]: E0129 16:10:19.183429 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:10:19 crc kubenswrapper[4714]: E0129 16:10:19.183688 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.209317 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.341208 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerStarted","Data":"5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7"} Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.342966 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-c9jhc" event={"ID":"f80aba4c-9372-4bea-b537-cbd9b0a3e972","Type":"ContainerStarted","Data":"cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e"} Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.343022 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-c9jhc" 
event={"ID":"f80aba4c-9372-4bea-b537-cbd9b0a3e972","Type":"ContainerStarted","Data":"e6137eb162566b71c4dcc93123d898651a75be1290e37dc810157e1ff49321df"} Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.345306 4714 generic.go:334] "Generic (PLEG): container finished" podID="8b20fd8d-1ebb-47d0-8676-403b99dac1ec" containerID="b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a" exitCode=0 Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.345384 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" event={"ID":"8b20fd8d-1ebb-47d0-8676-403b99dac1ec","Type":"ContainerDied","Data":"b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a"} Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.371480 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.393615 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.422775 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef
1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.441708 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.468977 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.481827 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.495212 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.514649 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.558066 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.599951 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.640799 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.682649 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.724621 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.763543 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.803454 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.844602 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.881149 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T16:10:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.920253 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:19 crc kubenswrapper[4714]: I0129 16:10:19.962816 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.008225 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.039270 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.079416 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.122324 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.138405 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 01:38:02.233237927 +0000 UTC Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.160839 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.204484 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.251064 4714 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.281666 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.321482 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.357529 4714 generic.go:334] "Generic (PLEG): container finished" podID="8b20fd8d-1ebb-47d0-8676-403b99dac1ec" containerID="312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae" exitCode=0 Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.357658 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" event={"ID":"8b20fd8d-1ebb-47d0-8676-403b99dac1ec","Type":"ContainerDied","Data":"312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae"} Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.367736 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.402355 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.446140 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.479597 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.516441 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.560217 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef
1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.596101 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.639749 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z 
is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.677196 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.722978 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.754596 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.1
68.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.777050 4714 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.778670 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.778706 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.778718 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.778834 4714 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.795524 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.850014 4714 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.850497 4714 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.852218 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.852281 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.852299 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.852324 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.852344 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:20Z","lastTransitionTime":"2026-01-29T16:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:20 crc kubenswrapper[4714]: E0129 16:10:20.871341 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.876472 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.876513 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.876531 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.876550 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.876562 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:20Z","lastTransitionTime":"2026-01-29T16:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.881323 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: E0129 16:10:20.891514 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.896301 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.896357 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.896376 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.896401 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.896419 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:20Z","lastTransitionTime":"2026-01-29T16:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:20 crc kubenswrapper[4714]: E0129 16:10:20.913419 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.916393 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.918115 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.918175 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.918191 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.918220 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.918237 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:20Z","lastTransitionTime":"2026-01-29T16:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:20 crc kubenswrapper[4714]: E0129 16:10:20.929441 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.934287 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.934350 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.934369 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.934398 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.934417 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:20Z","lastTransitionTime":"2026-01-29T16:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:20 crc kubenswrapper[4714]: E0129 16:10:20.952475 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:20 crc kubenswrapper[4714]: E0129 16:10:20.952664 4714 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.954811 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.954872 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.954894 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.954919 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.954959 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:20Z","lastTransitionTime":"2026-01-29T16:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:20 crc kubenswrapper[4714]: I0129 16:10:20.962778 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:20.999995 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.039600 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T16:10:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.057609 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.057676 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.057695 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.057721 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.057740 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:21Z","lastTransitionTime":"2026-01-29T16:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.139383 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 19:37:42.168647484 +0000 UTC Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.160554 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.160586 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.160599 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.160615 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.160627 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:21Z","lastTransitionTime":"2026-01-29T16:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.184072 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:21 crc kubenswrapper[4714]: E0129 16:10:21.184189 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.184553 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:21 crc kubenswrapper[4714]: E0129 16:10:21.184632 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.184706 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:21 crc kubenswrapper[4714]: E0129 16:10:21.184870 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.263425 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.263480 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.263499 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.263522 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.263539 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:21Z","lastTransitionTime":"2026-01-29T16:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.366123 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.366183 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.366206 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.366239 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.366320 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:21Z","lastTransitionTime":"2026-01-29T16:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.368430 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerStarted","Data":"662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf"} Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.372228 4714 generic.go:334] "Generic (PLEG): container finished" podID="8b20fd8d-1ebb-47d0-8676-403b99dac1ec" containerID="37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7" exitCode=0 Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.372278 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" event={"ID":"8b20fd8d-1ebb-47d0-8676-403b99dac1ec","Type":"ContainerDied","Data":"37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7"} Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.399803 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.421866 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.442573 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.462246 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.477175 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.477249 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.477276 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.477300 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.477317 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:21Z","lastTransitionTime":"2026-01-29T16:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.482219 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.500891 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.520620 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.540420 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.554452 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.569286 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.580272 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.580340 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.580367 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.580400 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.580425 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:21Z","lastTransitionTime":"2026-01-29T16:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.584117 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-29T16:10:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.635781 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\
"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.660501 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.683728 4714 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.683760 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.683770 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.683783 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.683792 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:21Z","lastTransitionTime":"2026-01-29T16:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.684957 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:21Z 
is after 2025-08-24T17:21:41Z" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.698300 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.786031 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.786070 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.786078 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.786093 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.786102 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:21Z","lastTransitionTime":"2026-01-29T16:10:21Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.888419 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.888460 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.888470 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.888485 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.888494 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:21Z","lastTransitionTime":"2026-01-29T16:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.992899 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.993001 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.993036 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.993086 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:21 crc kubenswrapper[4714]: I0129 16:10:21.993115 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:21Z","lastTransitionTime":"2026-01-29T16:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.096421 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.096488 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.096506 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.096534 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.096553 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:22Z","lastTransitionTime":"2026-01-29T16:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.111019 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:10:22 crc kubenswrapper[4714]: E0129 16:10:22.111288 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:10:30.111251322 +0000 UTC m=+36.631752482 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.140542 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 03:40:19.061605713 +0000 UTC Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.199758 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.199845 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.199870 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.199904 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.199928 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:22Z","lastTransitionTime":"2026-01-29T16:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.212235 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.212311 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.212382 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.212468 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:22 crc kubenswrapper[4714]: E0129 16:10:22.212552 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:10:22 crc kubenswrapper[4714]: E0129 16:10:22.212600 4714 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:10:22 crc kubenswrapper[4714]: E0129 16:10:22.212605 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:10:22 crc kubenswrapper[4714]: E0129 16:10:22.212642 4714 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:22 crc kubenswrapper[4714]: E0129 16:10:22.212683 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:30.212656906 +0000 UTC m=+36.733158066 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 29 16:10:22 crc kubenswrapper[4714]: E0129 16:10:22.212723 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:30.212696457 +0000 UTC m=+36.733197617 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 16:10:22 crc kubenswrapper[4714]: E0129 16:10:22.212812 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 16:10:22 crc kubenswrapper[4714]: E0129 16:10:22.212833 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 16:10:22 crc kubenswrapper[4714]: E0129 16:10:22.212849 4714 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 16:10:22 crc kubenswrapper[4714]: E0129 16:10:22.212893 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:30.212878683 +0000 UTC m=+36.733379843 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 16:10:22 crc kubenswrapper[4714]: E0129 16:10:22.213015 4714 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 16:10:22 crc kubenswrapper[4714]: E0129 16:10:22.213058 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:30.213044857 +0000 UTC m=+36.733546017 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.302739 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.302781 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.302796 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.302813 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.302825 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:22Z","lastTransitionTime":"2026-01-29T16:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.382067 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" event={"ID":"8b20fd8d-1ebb-47d0-8676-403b99dac1ec","Type":"ContainerStarted","Data":"93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59"}
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.405873 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.406001 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.406037 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.406068 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.406094 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:22Z","lastTransitionTime":"2026-01-29T16:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.409746 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:22Z is after 2025-08-24T17:21:41Z"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.433721 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:22Z is after 2025-08-24T17:21:41Z"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.455835 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:22Z is after 2025-08-24T17:21:41Z"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.466998 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:22Z is after 2025-08-24T17:21:41Z"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.487532 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:22Z is after 2025-08-24T17:21:41Z"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.504609 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:22Z is after 2025-08-24T17:21:41Z"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.508783 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.508812 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.508823 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.508838 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.508848 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:22Z","lastTransitionTime":"2026-01-29T16:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.521734 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:22Z is after 2025-08-24T17:21:41Z"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.535006 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:22Z is after 2025-08-24T17:21:41Z"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.560169 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:22Z is after 2025-08-24T17:21:41Z"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.575010 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:22Z is after 2025-08-24T17:21:41Z"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.590139 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:22Z is after 2025-08-24T17:21:41Z"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.610461 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:22Z is after 2025-08-24T17:21:41Z"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.611128 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.611196 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.611214 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.611238 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.611257 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:22Z","lastTransitionTime":"2026-01-29T16:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.623370 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:22Z is after 2025-08-24T17:21:41Z"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.640218 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:22Z is after 2025-08-24T17:21:41Z"
Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.666854 4714 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.713918 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.714236 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.714248 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.714261 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.714270 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:22Z","lastTransitionTime":"2026-01-29T16:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.817105 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.817178 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.817198 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.817731 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.818020 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:22Z","lastTransitionTime":"2026-01-29T16:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.921014 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.921050 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.921068 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.921097 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:22 crc kubenswrapper[4714]: I0129 16:10:22.921117 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:22Z","lastTransitionTime":"2026-01-29T16:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.024730 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.024773 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.024784 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.024801 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.024816 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:23Z","lastTransitionTime":"2026-01-29T16:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.128235 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.128322 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.128348 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.128379 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.128401 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:23Z","lastTransitionTime":"2026-01-29T16:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.141612 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 11:36:46.142107128 +0000 UTC Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.184283 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.184320 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.184466 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:23 crc kubenswrapper[4714]: E0129 16:10:23.184458 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:10:23 crc kubenswrapper[4714]: E0129 16:10:23.184632 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:10:23 crc kubenswrapper[4714]: E0129 16:10:23.184784 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.233591 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.233654 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.233671 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.233691 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.233706 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:23Z","lastTransitionTime":"2026-01-29T16:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.337180 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.337238 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.337256 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.337280 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.337297 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:23Z","lastTransitionTime":"2026-01-29T16:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.390533 4714 generic.go:334] "Generic (PLEG): container finished" podID="8b20fd8d-1ebb-47d0-8676-403b99dac1ec" containerID="93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59" exitCode=0 Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.390632 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" event={"ID":"8b20fd8d-1ebb-47d0-8676-403b99dac1ec","Type":"ContainerDied","Data":"93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59"} Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.400495 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerStarted","Data":"7ce018ea1685c4d1a8769fd746ba32c24d3927e84ec15fd550a1a476e344ad5e"} Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.400980 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.401044 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.408060 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11
\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.425737 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.440736 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.440776 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.440784 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.440801 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.440811 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:23Z","lastTransitionTime":"2026-01-29T16:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.461226 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.466451 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.467291 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.488847 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.507716 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.525388 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.538673 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.543486 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.543510 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.543518 4714 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.543533 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.543542 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:23Z","lastTransitionTime":"2026-01-29T16:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.549456 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.564305 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.575397 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.596526 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef
1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.612720 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.634870 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.647085 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.647120 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.647128 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.647143 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.647153 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:23Z","lastTransitionTime":"2026-01-29T16:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.648306 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.658642 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.671610 4714 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8e
c2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.683372 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.695865 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.714636 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef
1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.727423 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.744098 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce018ea1685c4d1a8769fd746ba32c24d3927e8
4ec15fd550a1a476e344ad5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.749601 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.749629 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.749638 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.749662 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.749671 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:23Z","lastTransitionTime":"2026-01-29T16:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.754809 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.770534 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.781790 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.792192 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.802894 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.812593 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.823407 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.838906 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb1
4662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.852632 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.852678 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.852690 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.852707 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.852718 4714 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:23Z","lastTransitionTime":"2026-01-29T16:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.859338 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.955270 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.955328 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.955347 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.955373 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.955394 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:23Z","lastTransitionTime":"2026-01-29T16:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:23 crc kubenswrapper[4714]: I0129 16:10:23.981492 4714 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.059033 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.059090 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.059107 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.059130 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.059146 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:24Z","lastTransitionTime":"2026-01-29T16:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.142596 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 20:25:44.147866268 +0000 UTC Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.162873 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.162957 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.162976 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.163001 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.163018 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:24Z","lastTransitionTime":"2026-01-29T16:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.201806 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.227687 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.245648 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.264622 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.264669 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.264682 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.264699 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.264712 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:24Z","lastTransitionTime":"2026-01-29T16:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.268700 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.287005 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.299966 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.331027 4714 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce018ea1685c4d1a8769fd746ba32c24d3927e84ec15fd550a1a476e344ad5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.345365 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.360876 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.367131 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.367201 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.367221 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.367247 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.367266 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:24Z","lastTransitionTime":"2026-01-29T16:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.377695 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc
7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.388011 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.407224 4714 generic.go:334] "Generic (PLEG): container finished" podID="8b20fd8d-1ebb-47d0-8676-403b99dac1ec" containerID="1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2" exitCode=0 Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.407294 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" event={"ID":"8b20fd8d-1ebb-47d0-8676-403b99dac1ec","Type":"ContainerDied","Data":"1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2"} Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.407357 4714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.410908 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.427779 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.442355 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.461773 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.470070 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.470120 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.470131 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.470150 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.470223 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:24Z","lastTransitionTime":"2026-01-29T16:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.484572 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.503015 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.515412 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.533675 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.557287 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.572869 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.572917 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.572951 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.572971 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.572983 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:24Z","lastTransitionTime":"2026-01-29T16:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.573271 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.592908 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.611354 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.626517 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.639351 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.670705 4714 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce018ea1685c4d1a8769fd746ba32c24d3927e84ec15fd550a1a476e344ad5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.675501 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.675568 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.675594 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:24 crc 
kubenswrapper[4714]: I0129 16:10:24.675630 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.675656 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:24Z","lastTransitionTime":"2026-01-29T16:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.688349 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.704380 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.721675 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.736270 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.778104 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.778138 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.778150 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.778166 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.778178 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:24Z","lastTransitionTime":"2026-01-29T16:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.881605 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.881644 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.881655 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.881671 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.881683 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:24Z","lastTransitionTime":"2026-01-29T16:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.984703 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.984763 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.984780 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.984800 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:24 crc kubenswrapper[4714]: I0129 16:10:24.984817 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:24Z","lastTransitionTime":"2026-01-29T16:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.087996 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.088086 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.088106 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.088141 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.088166 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:25Z","lastTransitionTime":"2026-01-29T16:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.142987 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 03:54:54.259891104 +0000 UTC Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.183867 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.183909 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.183887 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:25 crc kubenswrapper[4714]: E0129 16:10:25.184130 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:10:25 crc kubenswrapper[4714]: E0129 16:10:25.184218 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:10:25 crc kubenswrapper[4714]: E0129 16:10:25.184321 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.190400 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.190429 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.190437 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.190449 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.190458 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:25Z","lastTransitionTime":"2026-01-29T16:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.292790 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.292828 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.292842 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.292861 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.292873 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:25Z","lastTransitionTime":"2026-01-29T16:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.395843 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.395882 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.395893 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.395909 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.395920 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:25Z","lastTransitionTime":"2026-01-29T16:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.413287 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" event={"ID":"8b20fd8d-1ebb-47d0-8676-403b99dac1ec","Type":"ContainerStarted","Data":"0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8"} Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.413314 4714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.432363 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.446276 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T16:10:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.460896 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.478380 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.498559 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.498648 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.498675 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.498710 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.498738 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:25Z","lastTransitionTime":"2026-01-29T16:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.512086 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f
4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.530582 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.545869 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.565123 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.586717 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.602698 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.602777 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.602803 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.602834 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.602860 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:25Z","lastTransitionTime":"2026-01-29T16:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.605819 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.630126 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce018ea1685c4d1a8769fd746ba32c24d3927e84ec15fd550a1a476e344ad5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath
\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.646495 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.661790 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.681191 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.690853 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.706286 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.706505 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.706515 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.706530 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.706539 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:25Z","lastTransitionTime":"2026-01-29T16:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.808867 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.808947 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.808965 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.808986 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.809011 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:25Z","lastTransitionTime":"2026-01-29T16:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.912464 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.912862 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.913029 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.913137 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:25 crc kubenswrapper[4714]: I0129 16:10:25.913247 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:25Z","lastTransitionTime":"2026-01-29T16:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.021847 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.022166 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.022292 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.022385 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.022480 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:26Z","lastTransitionTime":"2026-01-29T16:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.125727 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.125783 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.125800 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.125821 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.125838 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:26Z","lastTransitionTime":"2026-01-29T16:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.144497 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 11:48:04.767504633 +0000 UTC Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.229016 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.229077 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.229095 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.229116 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.229133 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:26Z","lastTransitionTime":"2026-01-29T16:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.332612 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.332689 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.332716 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.332748 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.332771 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:26Z","lastTransitionTime":"2026-01-29T16:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.420447 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sbnkt_04b20f02-6c1e-4082-8233-8f06bda63195/ovnkube-controller/0.log" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.424968 4714 generic.go:334] "Generic (PLEG): container finished" podID="04b20f02-6c1e-4082-8233-8f06bda63195" containerID="7ce018ea1685c4d1a8769fd746ba32c24d3927e84ec15fd550a1a476e344ad5e" exitCode=1 Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.425023 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerDied","Data":"7ce018ea1685c4d1a8769fd746ba32c24d3927e84ec15fd550a1a476e344ad5e"} Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.426194 4714 scope.go:117] "RemoveContainer" containerID="7ce018ea1685c4d1a8769fd746ba32c24d3927e84ec15fd550a1a476e344ad5e" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.434994 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.435047 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.435064 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.435087 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.435105 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:26Z","lastTransitionTime":"2026-01-29T16:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.456791 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.478446 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.495156 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.515709 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.535354 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.537746 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.537820 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.537843 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.537870 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.537888 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:26Z","lastTransitionTime":"2026-01-29T16:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.550633 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.571525 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce018ea1685c4d1a8769fd746ba32c24d3927e84ec15fd550a1a476e344ad5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce018ea1685c4d1a8769fd746ba32c24d3927e84ec15fd550a1a476e344ad5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"message\\\":\\\"9 16:10:25.829056 6004 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:10:25.829066 6004 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:10:25.829080 6004 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:10:25.829136 6004 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 16:10:25.829145 6004 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 16:10:25.829167 6004 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 16:10:25.829172 6004 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 16:10:25.829187 6004 factory.go:656] Stopping watch factory\\\\nI0129 16:10:25.829200 6004 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:10:25.829231 6004 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:10:25.829240 6004 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:10:25.829249 6004 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:10:25.829257 6004 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:10:25.829264 6004 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 16:10:25.829277 6004 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 16:10:25.829285 6004 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 
16\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.593046 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.620476 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.639112 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.640023 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.640057 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.640067 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.640082 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.640091 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:26Z","lastTransitionTime":"2026-01-29T16:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.654459 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.669745 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.691840 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.720001 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.742588 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.742644 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.742655 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.742673 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.742686 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:26Z","lastTransitionTime":"2026-01-29T16:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.745046 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"moun
tPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.857526 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.858222 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.858279 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.858310 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.858331 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:26Z","lastTransitionTime":"2026-01-29T16:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.961611 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.961644 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.961654 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.961670 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:26 crc kubenswrapper[4714]: I0129 16:10:26.961681 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:26Z","lastTransitionTime":"2026-01-29T16:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.065431 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.065503 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.065538 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.065569 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.065592 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:27Z","lastTransitionTime":"2026-01-29T16:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.129718 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.145547 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 07:32:17.372594995 +0000 UTC Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.148649 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.163311 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnk
ube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.167648 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.167690 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.167703 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.167723 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.167755 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:27Z","lastTransitionTime":"2026-01-29T16:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.174534 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.183178 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.183233 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.183260 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:27 crc kubenswrapper[4714]: E0129 16:10:27.183585 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:10:27 crc kubenswrapper[4714]: E0129 16:10:27.183734 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:10:27 crc kubenswrapper[4714]: E0129 16:10:27.183922 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.188830 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.210360 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef
1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.226323 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.242150 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.255663 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.269586 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.271467 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.271526 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.271546 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.271570 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.271589 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:27Z","lastTransitionTime":"2026-01-29T16:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.285055 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.309133 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce018ea1685c4d1a8769fd746ba32c24d3927e84ec15fd550a1a476e344ad5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce018ea1685c4d1a8769fd746ba32c24d3927e84ec15fd550a1a476e344ad5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"message\\\":\\\"9 16:10:25.829056 6004 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:10:25.829066 6004 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:10:25.829080 6004 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:10:25.829136 6004 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 16:10:25.829145 6004 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 16:10:25.829167 6004 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 16:10:25.829172 6004 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 16:10:25.829187 6004 factory.go:656] Stopping watch factory\\\\nI0129 16:10:25.829200 6004 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:10:25.829231 6004 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:10:25.829240 6004 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:10:25.829249 6004 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:10:25.829257 6004 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:10:25.829264 6004 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 16:10:25.829277 6004 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 16:10:25.829285 6004 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 
16\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.324719 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.341037 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.367280 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.373341 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.373410 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.373431 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.373455 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.373471 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:27Z","lastTransitionTime":"2026-01-29T16:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.380539 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.431606 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sbnkt_04b20f02-6c1e-4082-8233-8f06bda63195/ovnkube-controller/0.log" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.435915 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerStarted","Data":"5156bbcc6abbf40a22215f2642e81fed55f603d494e294703866350469208075"} Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.436064 4714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.469831 4714 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\
\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.484448 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.484518 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.484535 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.484559 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.484581 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:27Z","lastTransitionTime":"2026-01-29T16:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.488031 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.500865 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.515700 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.528903 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.542867 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.559184 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef
1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.571236 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.587150 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.587197 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.587208 4714 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.587225 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.587236 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:27Z","lastTransitionTime":"2026-01-29T16:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.591967 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5156bbcc6abbf40a22215f2642e81fed55f603d4
94e294703866350469208075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce018ea1685c4d1a8769fd746ba32c24d3927e84ec15fd550a1a476e344ad5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"message\\\":\\\"9 16:10:25.829056 6004 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:10:25.829066 6004 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:10:25.829080 6004 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:10:25.829136 6004 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 16:10:25.829145 6004 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 16:10:25.829167 6004 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 16:10:25.829172 6004 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 16:10:25.829187 6004 factory.go:656] Stopping watch factory\\\\nI0129 16:10:25.829200 6004 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:10:25.829231 6004 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:10:25.829240 6004 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:10:25.829249 6004 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:10:25.829257 6004 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:10:25.829264 6004 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 16:10:25.829277 6004 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 16:10:25.829285 6004 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 
16\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.602897 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.612858 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.622029 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.632861 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.644160 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.657879 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.689689 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.689753 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.689767 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.689787 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.689802 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:27Z","lastTransitionTime":"2026-01-29T16:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.793163 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.793228 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.793245 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.793272 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.793292 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:27Z","lastTransitionTime":"2026-01-29T16:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.896744 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.896857 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.896881 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.896911 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:27 crc kubenswrapper[4714]: I0129 16:10:27.896962 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:27Z","lastTransitionTime":"2026-01-29T16:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.000527 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.000597 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.000627 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.000659 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.000682 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:28Z","lastTransitionTime":"2026-01-29T16:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.105057 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.105168 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.105187 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.105213 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.105234 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:28Z","lastTransitionTime":"2026-01-29T16:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.146455 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 15:24:40.250278632 +0000 UTC Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.207861 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.207904 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.207914 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.207949 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.207961 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:28Z","lastTransitionTime":"2026-01-29T16:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.310599 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.310645 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.310656 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.310673 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.310685 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:28Z","lastTransitionTime":"2026-01-29T16:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.413957 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.414007 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.414020 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.414044 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.414060 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:28Z","lastTransitionTime":"2026-01-29T16:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.442396 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sbnkt_04b20f02-6c1e-4082-8233-8f06bda63195/ovnkube-controller/1.log" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.443410 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sbnkt_04b20f02-6c1e-4082-8233-8f06bda63195/ovnkube-controller/0.log" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.447882 4714 generic.go:334] "Generic (PLEG): container finished" podID="04b20f02-6c1e-4082-8233-8f06bda63195" containerID="5156bbcc6abbf40a22215f2642e81fed55f603d494e294703866350469208075" exitCode=1 Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.448009 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerDied","Data":"5156bbcc6abbf40a22215f2642e81fed55f603d494e294703866350469208075"} Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.448116 4714 scope.go:117] "RemoveContainer" containerID="7ce018ea1685c4d1a8769fd746ba32c24d3927e84ec15fd550a1a476e344ad5e" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.448807 4714 scope.go:117] "RemoveContainer" containerID="5156bbcc6abbf40a22215f2642e81fed55f603d494e294703866350469208075" Jan 29 16:10:28 crc kubenswrapper[4714]: E0129 16:10:28.449031 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-sbnkt_openshift-ovn-kubernetes(04b20f02-6c1e-4082-8233-8f06bda63195)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.483328 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef
1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.505382 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.516926 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.517011 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.517029 4714 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.517055 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.517078 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:28Z","lastTransitionTime":"2026-01-29T16:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.527708 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.543701 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.562670 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.579903 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.620519 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.620590 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.620615 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.620645 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.620668 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:28Z","lastTransitionTime":"2026-01-29T16:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.620831 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5156bbcc6abbf40a22215f2642e81fed55f603d494e294703866350469208075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce018ea1685c4d1a8769fd746ba32c24d3927e84ec15fd550a1a476e344ad5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"message\\\":\\\"9 16:10:25.829056 6004 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:10:25.829066 6004 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:10:25.829080 6004 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:10:25.829136 6004 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 16:10:25.829145 6004 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 16:10:25.829167 6004 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 16:10:25.829172 6004 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 16:10:25.829187 6004 factory.go:656] Stopping watch factory\\\\nI0129 16:10:25.829200 6004 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:10:25.829231 6004 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:10:25.829240 6004 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:10:25.829249 6004 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:10:25.829257 6004 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:10:25.829264 6004 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 16:10:25.829277 6004 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 16:10:25.829285 6004 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 
16\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5156bbcc6abbf40a22215f2642e81fed55f603d494e294703866350469208075\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"message\\\":\\\"able:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:10:27.532704 6150 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-c9jhc\\\\nI0129 16:10:27.533048 6150 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 16:10:27.533063 6150 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
fail\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.642650 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.662478 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.686790 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.699739 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 
16:10:28.723078 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.723901 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.723991 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.724016 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.724050 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.724073 4714 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:28Z","lastTransitionTime":"2026-01-29T16:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.745677 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.765119 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.779142 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.828473 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.828537 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.828556 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.828581 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.828599 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:28Z","lastTransitionTime":"2026-01-29T16:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.932248 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.932342 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.932363 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.932388 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:28 crc kubenswrapper[4714]: I0129 16:10:28.932439 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:28Z","lastTransitionTime":"2026-01-29T16:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.035138 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.035209 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.035229 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.035256 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.035279 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:29Z","lastTransitionTime":"2026-01-29T16:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.138039 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.138102 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.138120 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.138144 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.138161 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:29Z","lastTransitionTime":"2026-01-29T16:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.147445 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 08:14:55.878801094 +0000 UTC Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.184091 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.184168 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.184194 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:29 crc kubenswrapper[4714]: E0129 16:10:29.184269 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:10:29 crc kubenswrapper[4714]: E0129 16:10:29.184496 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:10:29 crc kubenswrapper[4714]: E0129 16:10:29.184770 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.241438 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.241514 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.241540 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.241571 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.241594 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:29Z","lastTransitionTime":"2026-01-29T16:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.344258 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.344325 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.344342 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.344364 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.344381 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:29Z","lastTransitionTime":"2026-01-29T16:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.424973 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw"] Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.425705 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.428271 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.428311 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.446909 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.446977 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.446990 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.447008 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.447018 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:29Z","lastTransitionTime":"2026-01-29T16:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.452814 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sbnkt_04b20f02-6c1e-4082-8233-8f06bda63195/ovnkube-controller/1.log" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.455193 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.475497 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.487607 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2932c3bd-04c7-4494-8d43-03c4524a353f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tg8sw\" (UID: \"2932c3bd-04c7-4494-8d43-03c4524a353f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.487658 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2932c3bd-04c7-4494-8d43-03c4524a353f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tg8sw\" (UID: \"2932c3bd-04c7-4494-8d43-03c4524a353f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.487681 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvrtd\" (UniqueName: \"kubernetes.io/projected/2932c3bd-04c7-4494-8d43-03c4524a353f-kube-api-access-fvrtd\") pod \"ovnkube-control-plane-749d76644c-tg8sw\" (UID: \"2932c3bd-04c7-4494-8d43-03c4524a353f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.487707 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2932c3bd-04c7-4494-8d43-03c4524a353f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tg8sw\" (UID: \"2932c3bd-04c7-4494-8d43-03c4524a353f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.492367 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.510952 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.524584 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.535398 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.550236 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.550284 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.550297 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.550314 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.550328 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:29Z","lastTransitionTime":"2026-01-29T16:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.553166 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5156bbcc6abbf40a22215f2642e81fed55f603d494e294703866350469208075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce018ea1685c4d1a8769fd746ba32c24d3927e84ec15fd550a1a476e344ad5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"message\\\":\\\"9 16:10:25.829056 6004 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:10:25.829066 6004 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:10:25.829080 6004 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:10:25.829136 6004 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 16:10:25.829145 6004 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 16:10:25.829167 6004 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 16:10:25.829172 6004 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 16:10:25.829187 6004 factory.go:656] Stopping watch factory\\\\nI0129 16:10:25.829200 6004 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:10:25.829231 6004 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:10:25.829240 6004 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:10:25.829249 6004 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:10:25.829257 6004 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:10:25.829264 6004 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 16:10:25.829277 6004 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 16:10:25.829285 6004 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 
16\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5156bbcc6abbf40a22215f2642e81fed55f603d494e294703866350469208075\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"message\\\":\\\"able:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:10:27.532704 6150 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-c9jhc\\\\nI0129 16:10:27.533048 6150 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 16:10:27.533063 6150 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
fail\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.565439 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2932c3bd-04c7-4494-8d43-03c4524a353f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tg8sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.578877 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.588901 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2932c3bd-04c7-4494-8d43-03c4524a353f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tg8sw\" (UID: \"2932c3bd-04c7-4494-8d43-03c4524a353f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.589046 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2932c3bd-04c7-4494-8d43-03c4524a353f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tg8sw\" (UID: \"2932c3bd-04c7-4494-8d43-03c4524a353f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.589087 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2932c3bd-04c7-4494-8d43-03c4524a353f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tg8sw\" (UID: \"2932c3bd-04c7-4494-8d43-03c4524a353f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.589120 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvrtd\" (UniqueName: \"kubernetes.io/projected/2932c3bd-04c7-4494-8d43-03c4524a353f-kube-api-access-fvrtd\") pod \"ovnkube-control-plane-749d76644c-tg8sw\" (UID: \"2932c3bd-04c7-4494-8d43-03c4524a353f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.590210 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2932c3bd-04c7-4494-8d43-03c4524a353f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tg8sw\" (UID: \"2932c3bd-04c7-4494-8d43-03c4524a353f\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.590785 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2932c3bd-04c7-4494-8d43-03c4524a353f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tg8sw\" (UID: \"2932c3bd-04c7-4494-8d43-03c4524a353f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.594102 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.596268 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2932c3bd-04c7-4494-8d43-03c4524a353f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tg8sw\" (UID: \"2932c3bd-04c7-4494-8d43-03c4524a353f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.613863 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a
5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoi
nt\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.619797 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvrtd\" (UniqueName: \"kubernetes.io/projected/2932c3bd-04c7-4494-8d43-03c4524a353f-kube-api-access-fvrtd\") pod \"ovnkube-control-plane-749d76644c-tg8sw\" (UID: \"2932c3bd-04c7-4494-8d43-03c4524a353f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.627405 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.641982 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.652536 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.652580 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.652613 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.652631 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.652642 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:29Z","lastTransitionTime":"2026-01-29T16:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.660413 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.678154 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.696752 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.746697 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.755699 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.755765 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.755785 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.755811 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.755830 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:29Z","lastTransitionTime":"2026-01-29T16:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:29 crc kubenswrapper[4714]: W0129 16:10:29.767948 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2932c3bd_04c7_4494_8d43_03c4524a353f.slice/crio-e67687fd660061194ff48003829241962cc072844249c60589fca5b2a4a7d8ef WatchSource:0}: Error finding container e67687fd660061194ff48003829241962cc072844249c60589fca5b2a4a7d8ef: Status 404 returned error can't find the container with id e67687fd660061194ff48003829241962cc072844249c60589fca5b2a4a7d8ef Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.859437 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.859481 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.859493 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.859513 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.859526 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:29Z","lastTransitionTime":"2026-01-29T16:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.962318 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.962366 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.962383 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.962405 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:29 crc kubenswrapper[4714]: I0129 16:10:29.962421 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:29Z","lastTransitionTime":"2026-01-29T16:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.065651 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.065688 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.065701 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.065719 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.065732 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:30Z","lastTransitionTime":"2026-01-29T16:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.148341 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 10:07:15.895851809 +0000 UTC Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.168739 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-2w92b"] Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.169584 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.169604 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:30 crc kubenswrapper[4714]: E0129 16:10:30.169694 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.169720 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.169738 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.169757 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.169767 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:30Z","lastTransitionTime":"2026-01-29T16:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.184293 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.196115 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:10:30 crc kubenswrapper[4714]: E0129 16:10:30.196308 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:10:46.196276266 +0000 UTC m=+52.716777386 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.196451 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwv7p\" (UniqueName: \"kubernetes.io/projected/791456e8-8d95-4cdb-8fd1-d06a7586b328-kube-api-access-qwv7p\") pod \"network-metrics-daemon-2w92b\" (UID: \"791456e8-8d95-4cdb-8fd1-d06a7586b328\") " pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.196495 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs\") pod \"network-metrics-daemon-2w92b\" (UID: \"791456e8-8d95-4cdb-8fd1-d06a7586b328\") " pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.201522 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.221685 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.248478 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef
1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.259178 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.271887 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.271946 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.271957 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.271995 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.272007 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:30Z","lastTransitionTime":"2026-01-29T16:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.275696 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5156bbcc6abbf40a22215f2642e81fed55f603d494e294703866350469208075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce018ea1685c4d1a8769fd746ba32c24d3927e84ec15fd550a1a476e344ad5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"message\\\":\\\"9 16:10:25.829056 6004 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:10:25.829066 6004 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:10:25.829080 6004 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:10:25.829136 6004 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 16:10:25.829145 6004 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 16:10:25.829167 6004 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 16:10:25.829172 6004 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 16:10:25.829187 6004 factory.go:656] Stopping watch factory\\\\nI0129 16:10:25.829200 6004 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:10:25.829231 6004 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:10:25.829240 6004 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:10:25.829249 6004 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:10:25.829257 6004 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:10:25.829264 6004 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 16:10:25.829277 6004 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 16:10:25.829285 6004 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 
16\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5156bbcc6abbf40a22215f2642e81fed55f603d494e294703866350469208075\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"message\\\":\\\"able:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:10:27.532704 6150 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-c9jhc\\\\nI0129 16:10:27.533048 6150 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 16:10:27.533063 6150 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
fail\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.285284 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2932c3bd-04c7-4494-8d43-03c4524a353f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tg8sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.294887 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.297570 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.297622 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.297665 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.297696 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwv7p\" (UniqueName: \"kubernetes.io/projected/791456e8-8d95-4cdb-8fd1-d06a7586b328-kube-api-access-qwv7p\") pod \"network-metrics-daemon-2w92b\" (UID: \"791456e8-8d95-4cdb-8fd1-d06a7586b328\") " pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:30 crc kubenswrapper[4714]: E0129 16:10:30.297704 4714 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:10:30 crc 
kubenswrapper[4714]: I0129 16:10:30.297726 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs\") pod \"network-metrics-daemon-2w92b\" (UID: \"791456e8-8d95-4cdb-8fd1-d06a7586b328\") " pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:30 crc kubenswrapper[4714]: E0129 16:10:30.297762 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:46.297742792 +0000 UTC m=+52.818243912 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.297792 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:30 crc kubenswrapper[4714]: E0129 16:10:30.297826 4714 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:10:30 crc kubenswrapper[4714]: E0129 16:10:30.297874 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:10:30 crc kubenswrapper[4714]: E0129 16:10:30.297882 4714 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:10:30 crc kubenswrapper[4714]: E0129 16:10:30.297895 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:46.297878056 +0000 UTC m=+52.818379176 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:10:30 crc kubenswrapper[4714]: E0129 16:10:30.297901 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:10:30 crc kubenswrapper[4714]: E0129 16:10:30.297919 4714 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:30 crc kubenswrapper[4714]: E0129 16:10:30.297951 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:10:30 crc kubenswrapper[4714]: E0129 16:10:30.297965 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:10:30 crc kubenswrapper[4714]: E0129 16:10:30.297976 4714 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:30 crc kubenswrapper[4714]: E0129 16:10:30.297982 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs podName:791456e8-8d95-4cdb-8fd1-d06a7586b328 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:30.797959588 +0000 UTC m=+37.318460718 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs") pod "network-metrics-daemon-2w92b" (UID: "791456e8-8d95-4cdb-8fd1-d06a7586b328") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:10:30 crc kubenswrapper[4714]: E0129 16:10:30.298001 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:46.297994069 +0000 UTC m=+52.818495189 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:30 crc kubenswrapper[4714]: E0129 16:10:30.298021 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:46.29801311 +0000 UTC m=+52.818514230 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.314443 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwv7p\" (UniqueName: \"kubernetes.io/projected/791456e8-8d95-4cdb-8fd1-d06a7586b328-kube-api-access-qwv7p\") pod \"network-metrics-daemon-2w92b\" (UID: \"791456e8-8d95-4cdb-8fd1-d06a7586b328\") " pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.319829 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.332402 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.348061 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2w92b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"791456e8-8d95-4cdb-8fd1-d06a7586b328\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2w92b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.369758 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.375571 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.375597 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.375606 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.375621 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.375630 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:30Z","lastTransitionTime":"2026-01-29T16:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.393179 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.405998 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.419066 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.434538 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.448686 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.463817 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" event={"ID":"2932c3bd-04c7-4494-8d43-03c4524a353f","Type":"ContainerStarted","Data":"dc1d7b5f32fe8de4bfd5ba555838989293be538217424c86ea5cedadb8295f43"} Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.463875 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" event={"ID":"2932c3bd-04c7-4494-8d43-03c4524a353f","Type":"ContainerStarted","Data":"a78109e9e498f3d640934a2c45faf27133d32ece9e35e42ed48ddba720fa7a10"} Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.463887 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" event={"ID":"2932c3bd-04c7-4494-8d43-03c4524a353f","Type":"ContainerStarted","Data":"e67687fd660061194ff48003829241962cc072844249c60589fca5b2a4a7d8ef"} Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.479035 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.479081 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.479091 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.479109 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.479122 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:30Z","lastTransitionTime":"2026-01-29T16:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.482648 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.494833 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.508558 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.520708 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 
16:10:30.529844 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2w92b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"791456e8-8d95-4cdb-8fd1-d06a7586b328\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2w92b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.542245 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.558832 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnk
ube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.571646 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.581246 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.581320 4714 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.581337 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.581361 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.581375 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:30Z","lastTransitionTime":"2026-01-29T16:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.589382 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"
mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.617891 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009227
2e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.635167 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.658018 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.678492 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.684362 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.684421 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.684433 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.684464 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.684482 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:30Z","lastTransitionTime":"2026-01-29T16:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.699870 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.717875 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.742969 4714 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5156bbcc6abbf40a22215f2642e81fed55f603d494e294703866350469208075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce018ea1685c4d1a8769fd746ba32c24d3927e84ec15fd550a1a476e344ad5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"message\\\":\\\"9 16:10:25.829056 6004 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:10:25.829066 6004 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:10:25.829080 6004 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:10:25.829136 6004 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 16:10:25.829145 6004 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 16:10:25.829167 6004 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 16:10:25.829172 6004 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 16:10:25.829187 6004 factory.go:656] Stopping watch factory\\\\nI0129 16:10:25.829200 6004 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:10:25.829231 6004 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:10:25.829240 6004 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:10:25.829249 6004 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:10:25.829257 6004 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:10:25.829264 6004 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 16:10:25.829277 6004 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 16:10:25.829285 6004 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 16\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5156bbcc6abbf40a22215f2642e81fed55f603d494e294703866350469208075\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"message\\\":\\\"able:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 
options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:10:27.532704 6150 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-c9jhc\\\\nI0129 16:10:27.533048 6150 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 16:10:27.533063 6150 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fail\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6
62852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.761478 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2932c3bd-04c7-4494-8d43-03c4524a353f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78109e9e498f3d640934a2c45faf27133d32ece9e35e42ed48ddba720fa7a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc1d7b5f32fe8de4bfd5ba555838989293be538217424c86ea5cedadb8295f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tg8sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:30Z is after 2025-08-24T17:21:41Z" Jan 29 
16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.788270 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.788345 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.788373 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.788406 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.788432 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:30Z","lastTransitionTime":"2026-01-29T16:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.802347 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs\") pod \"network-metrics-daemon-2w92b\" (UID: \"791456e8-8d95-4cdb-8fd1-d06a7586b328\") " pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:30 crc kubenswrapper[4714]: E0129 16:10:30.802584 4714 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:10:30 crc kubenswrapper[4714]: E0129 16:10:30.802685 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs podName:791456e8-8d95-4cdb-8fd1-d06a7586b328 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:31.80265835 +0000 UTC m=+38.323159500 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs") pod "network-metrics-daemon-2w92b" (UID: "791456e8-8d95-4cdb-8fd1-d06a7586b328") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.892738 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.892789 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.892882 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.892919 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.892973 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:30Z","lastTransitionTime":"2026-01-29T16:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.979820 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.979907 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.979967 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.980003 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:30 crc kubenswrapper[4714]: I0129 16:10:30.980029 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:30Z","lastTransitionTime":"2026-01-29T16:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:31 crc kubenswrapper[4714]: E0129 16:10:31.008667 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.014844 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.014917 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.014970 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.015000 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.015027 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:31Z","lastTransitionTime":"2026-01-29T16:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:31 crc kubenswrapper[4714]: E0129 16:10:31.036590 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.042276 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.042348 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.042366 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.042396 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.042453 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:31Z","lastTransitionTime":"2026-01-29T16:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:31 crc kubenswrapper[4714]: E0129 16:10:31.063705 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.069475 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.069544 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
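Every one of these retries fails the same way: the node-status PATCH must pass the validating admission webhook node.network-node-identity.openshift.io, served on https://127.0.0.1:9743, and that webhook's TLS serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-29. The API server therefore rejects the call before the patch is ever applied; "[...]" in the retry entries stands for the status payload, which is byte-for-byte identical to the one shown in full in the first failure above. A minimal Go sketch for confirming the expiry from the node (the address is taken from the log; InsecureSkipVerify is deliberate, since an expired certificate would otherwise abort the handshake before it can be inspected):

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
    )

    func main() {
        // Dial the webhook endpoint named in the log and dump the validity
        // window of whatever certificate it presents. Verification is skipped
        // on purpose so the expired certificate can still be read.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        for _, cert := range conn.ConnectionState().PeerCertificates {
            fmt.Printf("subject=%q notBefore=%s notAfter=%s\n",
                cert.Subject.String(), cert.NotBefore, cert.NotAfter)
        }
    }

The payload itself is never the problem; the update will go through once the webhook's serving certificate is rotated.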
event="NodeHasNoDiskPressure" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.069562 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.069594 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.069613 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:31Z","lastTransitionTime":"2026-01-29T16:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:31 crc kubenswrapper[4714]: E0129 16:10:31.094569 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.100895 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.101004 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
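The kubelet bounds these retries: each status-sync attempt calls the PATCH up to a fixed retry count (nodeStatusUpdateRetry, 5 in the upstream kubelet sources) before giving up until the next sync interval, which is what produces the "Unable to update node status ... exceeds retry count" entry just below. A sketch of that control flow, with a hypothetical patchNodeStatus standing in for the API call (illustrative only, not the kubelet's actual code):

    package main

    import (
        "errors"
        "fmt"
    )

    // nodeStatusUpdateRetry mirrors the upstream kubelet constant.
    const nodeStatusUpdateRetry = 5

    // patchNodeStatus is a hypothetical stand-in for the PATCH that the
    // expired webhook rejects on every attempt.
    func patchNodeStatus() error {
        return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": x509: certificate has expired or is not yet valid`)
    }

    func updateNodeStatus() error {
        for i := 0; i < nodeStatusUpdateRetry; i++ {
            if err := patchNodeStatus(); err != nil {
                fmt.Println("Error updating node status, will retry:", err)
                continue
            }
            return nil
        }
        return errors.New("update node status exceeds retry count")
    }

    func main() {
        if err := updateNodeStatus(); err != nil {
            fmt.Println("Unable to update node status:", err)
        }
    }

Because the webhook rejects every attempt, the loop always exhausts its budget, and the whole cycle repeats on the next sync.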
event="NodeHasNoDiskPressure" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.101026 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.101061 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.101082 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:31Z","lastTransitionTime":"2026-01-29T16:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:31 crc kubenswrapper[4714]: E0129 16:10:31.124198 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:31 crc kubenswrapper[4714]: E0129 16:10:31.124466 4714 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.127522 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.127599 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.127624 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.127658 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.127684 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:31Z","lastTransitionTime":"2026-01-29T16:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.149182 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 15:28:44.721858485 +0000 UTC Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.183703 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.183703 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.183721 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:31 crc kubenswrapper[4714]: E0129 16:10:31.184045 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:10:31 crc kubenswrapper[4714]: E0129 16:10:31.184175 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:10:31 crc kubenswrapper[4714]: E0129 16:10:31.184395 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.230834 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.230915 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.230982 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.231020 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.231045 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:31Z","lastTransitionTime":"2026-01-29T16:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.334008 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.334071 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.334088 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.334111 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.334127 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:31Z","lastTransitionTime":"2026-01-29T16:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.437047 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.437309 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.437390 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.437483 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.437578 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:31Z","lastTransitionTime":"2026-01-29T16:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.540833 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.541164 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.541269 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.541366 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.541447 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:31Z","lastTransitionTime":"2026-01-29T16:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.644382 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.644793 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.645079 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.645406 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.645613 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:31Z","lastTransitionTime":"2026-01-29T16:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.748863 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.748926 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.748990 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.749032 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.749058 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:31Z","lastTransitionTime":"2026-01-29T16:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.814514 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs\") pod \"network-metrics-daemon-2w92b\" (UID: \"791456e8-8d95-4cdb-8fd1-d06a7586b328\") " pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:31 crc kubenswrapper[4714]: E0129 16:10:31.814741 4714 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:10:31 crc kubenswrapper[4714]: E0129 16:10:31.814868 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs podName:791456e8-8d95-4cdb-8fd1-d06a7586b328 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:33.814838924 +0000 UTC m=+40.335340084 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs") pod "network-metrics-daemon-2w92b" (UID: "791456e8-8d95-4cdb-8fd1-d06a7586b328") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.852439 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.852490 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.852508 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.852531 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.852548 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:31Z","lastTransitionTime":"2026-01-29T16:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.955187 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.955243 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.955261 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.955284 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:31 crc kubenswrapper[4714]: I0129 16:10:31.955303 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:31Z","lastTransitionTime":"2026-01-29T16:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.057654 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.057726 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.057754 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.057784 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.057808 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:32Z","lastTransitionTime":"2026-01-29T16:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.150100 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 22:40:02.755402892 +0000 UTC Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.160620 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.160688 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.160714 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.160740 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.160821 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:32Z","lastTransitionTime":"2026-01-29T16:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.183330 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:32 crc kubenswrapper[4714]: E0129 16:10:32.183596 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.250431 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.251377 4714 scope.go:117] "RemoveContainer" containerID="5156bbcc6abbf40a22215f2642e81fed55f603d494e294703866350469208075" Jan 29 16:10:32 crc kubenswrapper[4714]: E0129 16:10:32.251720 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-sbnkt_openshift-ovn-kubernetes(04b20f02-6c1e-4082-8233-8f06bda63195)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.264054 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.264111 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.264131 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.264154 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.264170 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:32Z","lastTransitionTime":"2026-01-29T16:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.278890 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.299147 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.313294 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.339200 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.365019 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.366503 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.366536 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.366549 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.366567 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.366580 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:32Z","lastTransitionTime":"2026-01-29T16:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.395668 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.418069 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5156bbcc6abbf40a22215f2642e81fed55f603d494e294703866350469208075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5156bbcc6abbf40a22215f2642e81fed55f603d494e294703866350469208075\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"message\\\":\\\"able:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:10:27.532704 6150 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-c9jhc\\\\nI0129 16:10:27.533048 6150 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 16:10:27.533063 6150 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
fail\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-sbnkt_openshift-ovn-kubernetes(04b20f02-6c1e-4082-8233-8f06bda63195)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.431499 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2932c3bd-04c7-4494-8d43-03c4524a353f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78109e9e498f3d640934a2c45faf27133d32ece9e35e42ed48ddba720fa7a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc1d7b5f32fe8de4bfd5ba555838989293be538217424c86ea5cedadb8295f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tg8sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:32Z is after 2025-08-24T17:21:41Z" Jan 29 
16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.449141 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.469649 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.469689 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.469699 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.469715 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.469726 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:32Z","lastTransitionTime":"2026-01-29T16:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.473204 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.497684 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.515775 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.532373 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2w92b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"791456e8-8d95-4cdb-8fd1-d06a7586b328\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2w92b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.552563 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.572654 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.572815 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.572839 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.572880 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.572904 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:32Z","lastTransitionTime":"2026-01-29T16:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.573741 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.590780 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.610324 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.675759 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.675817 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.675835 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.675857 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.675874 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:32Z","lastTransitionTime":"2026-01-29T16:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.779516 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.779583 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.779602 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.779627 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.779644 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:32Z","lastTransitionTime":"2026-01-29T16:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.882074 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.882114 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.882125 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.882144 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.882156 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:32Z","lastTransitionTime":"2026-01-29T16:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.985449 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.985515 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.985533 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.985558 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:32 crc kubenswrapper[4714]: I0129 16:10:32.985575 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:32Z","lastTransitionTime":"2026-01-29T16:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.089042 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.089106 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.089129 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.089160 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.089183 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:33Z","lastTransitionTime":"2026-01-29T16:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.151106 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 22:54:09.081811871 +0000 UTC Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.183647 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.183676 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:33 crc kubenswrapper[4714]: E0129 16:10:33.183846 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.183682 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:33 crc kubenswrapper[4714]: E0129 16:10:33.184105 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:10:33 crc kubenswrapper[4714]: E0129 16:10:33.184218 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.192610 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.192716 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.192743 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.192775 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.192797 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:33Z","lastTransitionTime":"2026-01-29T16:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.296827 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.296874 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.296890 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.296911 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.296926 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:33Z","lastTransitionTime":"2026-01-29T16:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.399582 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.399643 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.399660 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.399687 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.399703 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:33Z","lastTransitionTime":"2026-01-29T16:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.503380 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.503435 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.503449 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.503469 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.503482 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:33Z","lastTransitionTime":"2026-01-29T16:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.606208 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.606250 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.606260 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.606275 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.606286 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:33Z","lastTransitionTime":"2026-01-29T16:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.708843 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.708972 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.708994 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.709018 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.709039 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:33Z","lastTransitionTime":"2026-01-29T16:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.812495 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.812566 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.812586 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.812612 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.812630 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:33Z","lastTransitionTime":"2026-01-29T16:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.835419 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs\") pod \"network-metrics-daemon-2w92b\" (UID: \"791456e8-8d95-4cdb-8fd1-d06a7586b328\") " pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:33 crc kubenswrapper[4714]: E0129 16:10:33.835609 4714 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:10:33 crc kubenswrapper[4714]: E0129 16:10:33.835691 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs podName:791456e8-8d95-4cdb-8fd1-d06a7586b328 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:37.835667981 +0000 UTC m=+44.356169131 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs") pod "network-metrics-daemon-2w92b" (UID: "791456e8-8d95-4cdb-8fd1-d06a7586b328") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.915412 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.915487 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.915506 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.915531 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:33 crc kubenswrapper[4714]: I0129 16:10:33.915548 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:33Z","lastTransitionTime":"2026-01-29T16:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.018704 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.018769 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.018789 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.018812 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.018848 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:34Z","lastTransitionTime":"2026-01-29T16:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.122819 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.122883 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.122901 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.122924 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.123037 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:34Z","lastTransitionTime":"2026-01-29T16:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.151573 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 21:36:20.812027632 +0000 UTC Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.184231 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:34 crc kubenswrapper[4714]: E0129 16:10:34.184825 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.218200 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5156bbcc6abbf40a22215f2642e81fed55f603d494e294703866350469208075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5156bbcc6abbf40a22215f2642e81fed55f603d494e294703866350469208075\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"message\\\":\\\"able:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:10:27.532704 6150 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-c9jhc\\\\nI0129 16:10:27.533048 6150 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 16:10:27.533063 6150 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
fail\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-sbnkt_openshift-ovn-kubernetes(04b20f02-6c1e-4082-8233-8f06bda63195)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.225631 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.225690 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.225709 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.225739 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.225757 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:34Z","lastTransitionTime":"2026-01-29T16:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.239198 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2932c3bd-04c7-4494-8d43-03c4524a353f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78109e9e498f3d640934a2c45faf27133d32ece9e35e42ed48ddba720fa7a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc1d7b5f32fe8de4bfd5ba555838989293be538217424c86ea5cedadb8295f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tg8sw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.259188 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.280210 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.297806 4714 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.313242 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2w92b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"791456e8-8d95-4cdb-8fd1-d06a7586b328\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2w92b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.330234 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.330344 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.330420 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.330507 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.330534 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:34Z","lastTransitionTime":"2026-01-29T16:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.331597 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.346269 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.370191 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.390385 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/et
c/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.406594 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f
8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.420217 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.430774 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.432872 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.432919 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.432957 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.432981 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.432993 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:34Z","lastTransitionTime":"2026-01-29T16:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.446994 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.460418 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.482000 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef
1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.493964 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.535027 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.535079 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.535088 4714 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.535101 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.535111 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:34Z","lastTransitionTime":"2026-01-29T16:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.637508 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.637572 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.637589 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.637615 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.637635 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:34Z","lastTransitionTime":"2026-01-29T16:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.741241 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.741309 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.741335 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.741366 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.741388 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:34Z","lastTransitionTime":"2026-01-29T16:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.844546 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.844602 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.844621 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.844653 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.844676 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:34Z","lastTransitionTime":"2026-01-29T16:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.947746 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.947810 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.947826 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.947849 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:34 crc kubenswrapper[4714]: I0129 16:10:34.947866 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:34Z","lastTransitionTime":"2026-01-29T16:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.051110 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.051172 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.051190 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.051212 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.051228 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:35Z","lastTransitionTime":"2026-01-29T16:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.151730 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 22:52:08.830148651 +0000 UTC Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.153770 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.153817 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.153834 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.153863 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.153884 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:35Z","lastTransitionTime":"2026-01-29T16:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.183757 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.183846 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.183882 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:35 crc kubenswrapper[4714]: E0129 16:10:35.184003 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:10:35 crc kubenswrapper[4714]: E0129 16:10:35.184120 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:10:35 crc kubenswrapper[4714]: E0129 16:10:35.184224 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.255970 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.256003 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.256014 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.256027 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.256037 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:35Z","lastTransitionTime":"2026-01-29T16:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.358607 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.358686 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.358704 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.358728 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.358744 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:35Z","lastTransitionTime":"2026-01-29T16:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.461646 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.461714 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.461730 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.461756 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.461774 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:35Z","lastTransitionTime":"2026-01-29T16:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.565101 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.565210 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.565237 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.565309 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.565331 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:35Z","lastTransitionTime":"2026-01-29T16:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.668418 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.668489 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.668511 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.668546 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.668588 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:35Z","lastTransitionTime":"2026-01-29T16:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.771783 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.771840 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.771859 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.771889 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.771908 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:35Z","lastTransitionTime":"2026-01-29T16:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.875911 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.876007 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.876026 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.876050 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.876067 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:35Z","lastTransitionTime":"2026-01-29T16:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.979669 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.979736 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.979752 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.979776 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:35 crc kubenswrapper[4714]: I0129 16:10:35.979793 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:35Z","lastTransitionTime":"2026-01-29T16:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.082276 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.082351 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.082376 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.082410 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.082434 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:36Z","lastTransitionTime":"2026-01-29T16:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.152292 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 09:56:56.443823918 +0000 UTC Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.183197 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:36 crc kubenswrapper[4714]: E0129 16:10:36.183448 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.185503 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.185565 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.185581 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.185609 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.185627 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:36Z","lastTransitionTime":"2026-01-29T16:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.289316 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.289382 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.289398 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.289425 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.289442 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:36Z","lastTransitionTime":"2026-01-29T16:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.394611 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.394696 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.394723 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.394754 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.394779 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:36Z","lastTransitionTime":"2026-01-29T16:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.501692 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.501766 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.501785 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.501809 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.501827 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:36Z","lastTransitionTime":"2026-01-29T16:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.605129 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.605198 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.605219 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.605243 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.605261 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:36Z","lastTransitionTime":"2026-01-29T16:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.708697 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.708749 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.708762 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.708779 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.708790 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:36Z","lastTransitionTime":"2026-01-29T16:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.811866 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.811974 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.812003 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.812035 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.812056 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:36Z","lastTransitionTime":"2026-01-29T16:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.915895 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.916260 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.916276 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.916299 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:36 crc kubenswrapper[4714]: I0129 16:10:36.916326 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:36Z","lastTransitionTime":"2026-01-29T16:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.019578 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.019651 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.019673 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.019702 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.019719 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:37Z","lastTransitionTime":"2026-01-29T16:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.123230 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.123289 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.123305 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.123326 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.123343 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:37Z","lastTransitionTime":"2026-01-29T16:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.152713 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 20:11:28.548799934 +0000 UTC Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.183241 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.183274 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.183274 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:37 crc kubenswrapper[4714]: E0129 16:10:37.183453 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:10:37 crc kubenswrapper[4714]: E0129 16:10:37.183584 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:10:37 crc kubenswrapper[4714]: E0129 16:10:37.183669 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.226172 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.226257 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.226281 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.226311 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.226332 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:37Z","lastTransitionTime":"2026-01-29T16:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.329886 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.329978 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.329996 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.330020 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.330057 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:37Z","lastTransitionTime":"2026-01-29T16:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.433496 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.433578 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.433607 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.433710 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.433748 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:37Z","lastTransitionTime":"2026-01-29T16:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.537813 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.537896 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.537921 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.538002 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.538028 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:37Z","lastTransitionTime":"2026-01-29T16:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.640844 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.640884 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.640897 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.640913 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.640924 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:37Z","lastTransitionTime":"2026-01-29T16:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.743317 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.743359 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.743370 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.743385 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.743396 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:37Z","lastTransitionTime":"2026-01-29T16:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.847615 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.847697 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.847722 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.847754 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.847779 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:37Z","lastTransitionTime":"2026-01-29T16:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.879521 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs\") pod \"network-metrics-daemon-2w92b\" (UID: \"791456e8-8d95-4cdb-8fd1-d06a7586b328\") " pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:37 crc kubenswrapper[4714]: E0129 16:10:37.879763 4714 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:10:37 crc kubenswrapper[4714]: E0129 16:10:37.879908 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs podName:791456e8-8d95-4cdb-8fd1-d06a7586b328 nodeName:}" failed. No retries permitted until 2026-01-29 16:10:45.879871898 +0000 UTC m=+52.400373058 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs") pod "network-metrics-daemon-2w92b" (UID: "791456e8-8d95-4cdb-8fd1-d06a7586b328") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.951213 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.951279 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.951296 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.951321 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:37 crc kubenswrapper[4714]: I0129 16:10:37.951340 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:37Z","lastTransitionTime":"2026-01-29T16:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.055090 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.055154 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.055178 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.055210 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.055236 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:38Z","lastTransitionTime":"2026-01-29T16:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.153510 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 07:48:14.576587515 +0000 UTC Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.157774 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.157830 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.157850 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.157875 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.157892 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:38Z","lastTransitionTime":"2026-01-29T16:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.183484 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:38 crc kubenswrapper[4714]: E0129 16:10:38.183669 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.261141 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.261209 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.261233 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.261294 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.261317 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:38Z","lastTransitionTime":"2026-01-29T16:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.363492 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.363531 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.363541 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.363557 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.363568 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:38Z","lastTransitionTime":"2026-01-29T16:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.466918 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.467018 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.467036 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.467061 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.467078 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:38Z","lastTransitionTime":"2026-01-29T16:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.570374 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.570433 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.570449 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.570476 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.570494 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:38Z","lastTransitionTime":"2026-01-29T16:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.674158 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.674259 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.674277 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.674299 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.674316 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:38Z","lastTransitionTime":"2026-01-29T16:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.777424 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.777504 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.777525 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.777552 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.777570 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:38Z","lastTransitionTime":"2026-01-29T16:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.880322 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.880367 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.880383 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.880398 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.880409 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:38Z","lastTransitionTime":"2026-01-29T16:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.985432 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.985480 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.985492 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.985509 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:38 crc kubenswrapper[4714]: I0129 16:10:38.985519 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:38Z","lastTransitionTime":"2026-01-29T16:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.087693 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.087732 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.087743 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.087760 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.087772 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:39Z","lastTransitionTime":"2026-01-29T16:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.154201 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 14:21:24.805530221 +0000 UTC Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.183849 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.183875 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.183953 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:39 crc kubenswrapper[4714]: E0129 16:10:39.184084 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:10:39 crc kubenswrapper[4714]: E0129 16:10:39.184206 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:10:39 crc kubenswrapper[4714]: E0129 16:10:39.184330 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.191205 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.191235 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.191244 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.191257 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.191265 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:39Z","lastTransitionTime":"2026-01-29T16:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.294326 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.294385 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.294403 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.294425 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.294443 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:39Z","lastTransitionTime":"2026-01-29T16:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.397585 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.397727 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.397750 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.397777 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.397794 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:39Z","lastTransitionTime":"2026-01-29T16:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.500995 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.501056 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.501075 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.501099 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.501116 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:39Z","lastTransitionTime":"2026-01-29T16:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.604993 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.605059 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.605080 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.605111 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.605133 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:39Z","lastTransitionTime":"2026-01-29T16:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.708388 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.708709 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.708840 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.709006 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.709033 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:39Z","lastTransitionTime":"2026-01-29T16:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.811621 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.811682 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.811703 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.811728 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.811748 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:39Z","lastTransitionTime":"2026-01-29T16:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.915009 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.915097 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.915117 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.915140 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:39 crc kubenswrapper[4714]: I0129 16:10:39.915157 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:39Z","lastTransitionTime":"2026-01-29T16:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.017706 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.017792 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.017813 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.017842 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.017865 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:40Z","lastTransitionTime":"2026-01-29T16:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.121313 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.121414 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.121435 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.121514 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.121536 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:40Z","lastTransitionTime":"2026-01-29T16:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.154732 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 20:41:24.497319297 +0000 UTC Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.183767 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:40 crc kubenswrapper[4714]: E0129 16:10:40.184625 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.224765 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.224814 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.224830 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.224855 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.224874 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:40Z","lastTransitionTime":"2026-01-29T16:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.327794 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.327870 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.327892 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.327922 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.327972 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:40Z","lastTransitionTime":"2026-01-29T16:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.431545 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.431847 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.432036 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.432186 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.432401 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:40Z","lastTransitionTime":"2026-01-29T16:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.536597 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.536667 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.536693 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.536724 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.536748 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:40Z","lastTransitionTime":"2026-01-29T16:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.640299 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.640379 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.640413 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.640442 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.640463 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:40Z","lastTransitionTime":"2026-01-29T16:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.743369 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.743455 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.743481 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.743506 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.743524 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:40Z","lastTransitionTime":"2026-01-29T16:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.847355 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.847420 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.847441 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.847467 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.847484 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:40Z","lastTransitionTime":"2026-01-29T16:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.951378 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.951444 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.951462 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.951487 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:40 crc kubenswrapper[4714]: I0129 16:10:40.951506 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:40Z","lastTransitionTime":"2026-01-29T16:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.055588 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.055672 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.055699 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.055730 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.055755 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:41Z","lastTransitionTime":"2026-01-29T16:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.155045 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 13:32:15.739434184 +0000 UTC Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.158350 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.158410 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.158427 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.158453 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.158471 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:41Z","lastTransitionTime":"2026-01-29T16:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.184319 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.184386 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.184329 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:41 crc kubenswrapper[4714]: E0129 16:10:41.184513 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:10:41 crc kubenswrapper[4714]: E0129 16:10:41.184664 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:10:41 crc kubenswrapper[4714]: E0129 16:10:41.184795 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.263095 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.263198 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.263212 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.263233 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.263247 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:41Z","lastTransitionTime":"2026-01-29T16:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.367166 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.367234 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.367248 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.367272 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.367290 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:41Z","lastTransitionTime":"2026-01-29T16:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.458412 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.461631 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.461707 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.461725 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.461753 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.461777 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:41Z","lastTransitionTime":"2026-01-29T16:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.471719 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.478630 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:41 crc kubenswrapper[4714]: E0129 16:10:41.485655 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.492507 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.492576 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.492598 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.492628 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.492651 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:41Z","lastTransitionTime":"2026-01-29T16:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.504489 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:41 crc kubenswrapper[4714]: E0129 16:10:41.519004 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.524122 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.524216 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.524236 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.524261 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.524280 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:41Z","lastTransitionTime":"2026-01-29T16:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.531456 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f71
1df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:41 crc kubenswrapper[4714]: E0129 16:10:41.544856 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.548106 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.550630 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.550695 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.550714 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.550743 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.550764 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:41Z","lastTransitionTime":"2026-01-29T16:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.563438 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2w92b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"791456e8-8d95-4cdb-8fd1-d06a7586b328\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2w92b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:41 crc kubenswrapper[4714]: E0129 16:10:41.572685 4714 kubelet_node_status.go:585] "Error 
updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256
:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"si
zeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":46317936
5},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.577860 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.577916 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.577969 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.577999 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.578021 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:41Z","lastTransitionTime":"2026-01-29T16:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.582225 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.599408 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:41 crc kubenswrapper[4714]: E0129 16:10:41.600947 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:41 crc kubenswrapper[4714]: E0129 16:10:41.601324 4714 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.603340 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.603371 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.603386 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.603410 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.603425 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:41Z","lastTransitionTime":"2026-01-29T16:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.616419 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.636305 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.667735 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.689232 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.705717 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.705763 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.705780 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.705802 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.705818 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:41Z","lastTransitionTime":"2026-01-29T16:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.710751 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.728006 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.744342 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.759498 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.796335 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5156bbcc6abbf40a22215f2642e81fed55f603d4
94e294703866350469208075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5156bbcc6abbf40a22215f2642e81fed55f603d494e294703866350469208075\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"message\\\":\\\"able:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:10:27.532704 6150 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-c9jhc\\\\nI0129 16:10:27.533048 6150 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 16:10:27.533063 6150 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fail\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sbnkt_openshift-ovn-kubernetes(04b20f02-6c1e-4082-8233-8f06bda63195)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.809524 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.809578 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.809596 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.809621 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.809638 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:41Z","lastTransitionTime":"2026-01-29T16:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.818167 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2932c3bd-04c7-4494-8d43-03c4524a353f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78109e9e498f3d640934a2c45faf27133d32ece9e35e42ed48ddba720fa7a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc1d7b5f32fe8de4bfd5ba555838989293be538217424c86ea5cedadb8295f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tg8sw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.913111 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.913559 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.913686 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.913795 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:41 crc kubenswrapper[4714]: I0129 16:10:41.913881 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:41Z","lastTransitionTime":"2026-01-29T16:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.018070 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.018139 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.018157 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.018183 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.018201 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:42Z","lastTransitionTime":"2026-01-29T16:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.121896 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.122021 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.122075 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.122133 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.122209 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:42Z","lastTransitionTime":"2026-01-29T16:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.155665 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 13:58:27.017021467 +0000 UTC Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.184355 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:42 crc kubenswrapper[4714]: E0129 16:10:42.184683 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.225741 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.225838 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.225866 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.225897 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.225920 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:42Z","lastTransitionTime":"2026-01-29T16:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.329486 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.329556 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.329577 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.329611 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.329633 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:42Z","lastTransitionTime":"2026-01-29T16:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.432135 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.432210 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.432240 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.432273 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.432295 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:42Z","lastTransitionTime":"2026-01-29T16:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.535411 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.535807 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.536053 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.536346 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.536743 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:42Z","lastTransitionTime":"2026-01-29T16:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.639770 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.639827 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.639845 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.639868 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.639885 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:42Z","lastTransitionTime":"2026-01-29T16:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.743416 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.743784 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.743988 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.744212 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.744423 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:42Z","lastTransitionTime":"2026-01-29T16:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.848209 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.848277 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.848295 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.848321 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.848340 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:42Z","lastTransitionTime":"2026-01-29T16:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.952044 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.952112 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.952129 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.952153 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:42 crc kubenswrapper[4714]: I0129 16:10:42.952174 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:42Z","lastTransitionTime":"2026-01-29T16:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.055037 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.055102 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.055127 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.055156 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.055180 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:43Z","lastTransitionTime":"2026-01-29T16:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.156377 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 11:13:54.491823218 +0000 UTC Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.158824 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.158877 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.158894 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.158918 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.158989 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:43Z","lastTransitionTime":"2026-01-29T16:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.183357 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.183423 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.183364 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:43 crc kubenswrapper[4714]: E0129 16:10:43.183551 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:10:43 crc kubenswrapper[4714]: E0129 16:10:43.183711 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:10:43 crc kubenswrapper[4714]: E0129 16:10:43.183830 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.262246 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.262321 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.262342 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.262367 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.262386 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:43Z","lastTransitionTime":"2026-01-29T16:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.366071 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.366144 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.366162 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.366189 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.366207 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:43Z","lastTransitionTime":"2026-01-29T16:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.469639 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.469699 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.469715 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.469747 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.469763 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:43Z","lastTransitionTime":"2026-01-29T16:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.572119 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.572186 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.572210 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.572240 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.572263 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:43Z","lastTransitionTime":"2026-01-29T16:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.675377 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.675424 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.675440 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.675463 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.675482 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:43Z","lastTransitionTime":"2026-01-29T16:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.781133 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.781190 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.781203 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.781224 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.781237 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:43Z","lastTransitionTime":"2026-01-29T16:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.884349 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.884411 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.884431 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.884456 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.884474 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:43Z","lastTransitionTime":"2026-01-29T16:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.987475 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.987530 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.987548 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.987570 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:43 crc kubenswrapper[4714]: I0129 16:10:43.987587 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:43Z","lastTransitionTime":"2026-01-29T16:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.091176 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.091243 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.091263 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.091287 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.091307 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:44Z","lastTransitionTime":"2026-01-29T16:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.157155 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 00:26:57.601562412 +0000 UTC Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.183809 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:44 crc kubenswrapper[4714]: E0129 16:10:44.184049 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.193616 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.193677 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.193699 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.193727 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.193749 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:44Z","lastTransitionTime":"2026-01-29T16:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.207701 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2932c3bd-04c7-4494-8d43-03c4524a353f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78109e9e498f3d640934a2c45faf27133d32ece9e35e42ed48ddba720fa7a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc1d7b5f32fe8de4bfd5ba555838989293be538217424c86ea5cedadb8295f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tg8sw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:44Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.230902 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89403a5-379d-4c3f-a87f-8d2ed63ab368\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f0bdc60ea5a5e188b2ed8894bb387084567b294c3a356fed01493d4bd6a7caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6068173079b1d5cf7de92fe34bd0a4701b11a4dba7ae384f2d0e33f185656107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6565dc4c35c618a239bdc2be6dd45a9057e83573c92c3fc816350eb014c822c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:44Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.253189 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:44Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.269974 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:44Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.296755 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.296806 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.296825 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.296850 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.296866 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:44Z","lastTransitionTime":"2026-01-29T16:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.302813 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5156bbcc6abbf40a22215f2642e81fed55f603d494e294703866350469208075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5156bbcc6abbf40a22215f2642e81fed55f603d494e294703866350469208075\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"message\\\":\\\"able:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:10:27.532704 6150 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-c9jhc\\\\nI0129 16:10:27.533048 6150 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 16:10:27.533063 6150 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
fail\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-sbnkt_openshift-ovn-kubernetes(04b20f02-6c1e-4082-8233-8f06bda63195)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:44Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.320874 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2w92b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"791456e8-8d95-4cdb-8fd1-d06a7586b328\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2w92b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:44Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.340627 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:44Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.359771 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:44Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.382636 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:44Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.400243 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.400289 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.400305 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.400325 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.400341 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:44Z","lastTransitionTime":"2026-01-29T16:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.400504 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:44Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.423989 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:44Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.444191 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnk
ube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:44Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.460891 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:44Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.480593 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:44Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.497976 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:44Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.502873 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.502962 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.502983 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.503012 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.503033 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:44Z","lastTransitionTime":"2026-01-29T16:10:44Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.522628 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:44Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.546416 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:44Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.567719 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:44Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.606040 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.606104 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.606131 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.606162 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.606186 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:44Z","lastTransitionTime":"2026-01-29T16:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.709917 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.710032 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.710051 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.710081 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.710100 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:44Z","lastTransitionTime":"2026-01-29T16:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.812963 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.813023 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.813047 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.813079 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.813100 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:44Z","lastTransitionTime":"2026-01-29T16:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.915823 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.915907 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.915921 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.915975 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:44 crc kubenswrapper[4714]: I0129 16:10:44.915994 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:44Z","lastTransitionTime":"2026-01-29T16:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.019361 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.019424 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.019436 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.019456 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.019468 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:45Z","lastTransitionTime":"2026-01-29T16:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.122363 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.122426 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.122444 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.122468 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.122486 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:45Z","lastTransitionTime":"2026-01-29T16:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.157761 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 16:03:11.704903473 +0000 UTC Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.183417 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.183501 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.183438 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:45 crc kubenswrapper[4714]: E0129 16:10:45.183588 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:10:45 crc kubenswrapper[4714]: E0129 16:10:45.183693 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:10:45 crc kubenswrapper[4714]: E0129 16:10:45.183872 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.226659 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.226743 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.226763 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.227701 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.227796 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:45Z","lastTransitionTime":"2026-01-29T16:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.330304 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.330358 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.330371 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.330393 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.330409 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:45Z","lastTransitionTime":"2026-01-29T16:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.433517 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.433561 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.433574 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.433594 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.433604 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:45Z","lastTransitionTime":"2026-01-29T16:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.537793 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.537897 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.537908 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.537948 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.537961 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:45Z","lastTransitionTime":"2026-01-29T16:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.641120 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.641171 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.641188 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.641210 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.641227 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:45Z","lastTransitionTime":"2026-01-29T16:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.744057 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.744108 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.744122 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.744140 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.744151 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:45Z","lastTransitionTime":"2026-01-29T16:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.849406 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.849511 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.849531 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.849567 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.849589 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:45Z","lastTransitionTime":"2026-01-29T16:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.952671 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.952728 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.952751 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.952780 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.952805 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:45Z","lastTransitionTime":"2026-01-29T16:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:45 crc kubenswrapper[4714]: I0129 16:10:45.972011 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs\") pod \"network-metrics-daemon-2w92b\" (UID: \"791456e8-8d95-4cdb-8fd1-d06a7586b328\") " pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:45 crc kubenswrapper[4714]: E0129 16:10:45.972219 4714 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:10:45 crc kubenswrapper[4714]: E0129 16:10:45.972310 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs podName:791456e8-8d95-4cdb-8fd1-d06a7586b328 nodeName:}" failed. No retries permitted until 2026-01-29 16:11:01.97228666 +0000 UTC m=+68.492787780 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs") pod "network-metrics-daemon-2w92b" (UID: "791456e8-8d95-4cdb-8fd1-d06a7586b328") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.056578 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.056628 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.056641 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.056658 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.056670 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:46Z","lastTransitionTime":"2026-01-29T16:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.157985 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 05:08:13.085978649 +0000 UTC Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.160523 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.160583 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.160604 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.160628 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.160645 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:46Z","lastTransitionTime":"2026-01-29T16:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.183813 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:46 crc kubenswrapper[4714]: E0129 16:10:46.184400 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.184733 4714 scope.go:117] "RemoveContainer" containerID="5156bbcc6abbf40a22215f2642e81fed55f603d494e294703866350469208075" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.263778 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.263845 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.263863 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.263891 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.263914 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:46Z","lastTransitionTime":"2026-01-29T16:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.275504 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:10:46 crc kubenswrapper[4714]: E0129 16:10:46.275672 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:11:18.275640796 +0000 UTC m=+84.796141946 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.367114 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.367391 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.367523 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.367659 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.367775 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:46Z","lastTransitionTime":"2026-01-29T16:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.376799 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.376873 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.376921 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:46 crc kubenswrapper[4714]: E0129 16:10:46.377213 4714 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:10:46 crc kubenswrapper[4714]: E0129 16:10:46.377303 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:11:18.377279047 +0000 UTC m=+84.897780197 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:10:46 crc kubenswrapper[4714]: E0129 16:10:46.377345 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:10:46 crc kubenswrapper[4714]: E0129 16:10:46.377388 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:10:46 crc kubenswrapper[4714]: E0129 16:10:46.377484 4714 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:46 crc kubenswrapper[4714]: E0129 16:10:46.377497 4714 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:10:46 crc kubenswrapper[4714]: E0129 16:10:46.377584 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:11:18.377557765 +0000 UTC m=+84.898058915 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:10:46 crc kubenswrapper[4714]: E0129 16:10:46.377611 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:11:18.377599346 +0000 UTC m=+84.898100496 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:46 crc kubenswrapper[4714]: E0129 16:10:46.377642 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:10:46 crc kubenswrapper[4714]: E0129 16:10:46.377697 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:10:46 crc kubenswrapper[4714]: E0129 16:10:46.377728 4714 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:46 crc kubenswrapper[4714]: E0129 16:10:46.377824 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:11:18.377796982 +0000 UTC m=+84.898298142 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.377016 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.471761 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.471814 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.471833 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.471857 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.471874 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:46Z","lastTransitionTime":"2026-01-29T16:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.537282 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sbnkt_04b20f02-6c1e-4082-8233-8f06bda63195/ovnkube-controller/1.log" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.541493 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerStarted","Data":"98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc"} Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.541907 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.564688 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.574977 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.575007 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.575019 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.575036 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.575048 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:46Z","lastTransitionTime":"2026-01-29T16:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.588275 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.616168 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.641981 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.674065 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2w92b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"791456e8-8d95-4cdb-8fd1-d06a7586b328\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2w92b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.678213 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.678259 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.678271 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.678290 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.678307 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:46Z","lastTransitionTime":"2026-01-29T16:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.701708 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.723419 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.740046 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.760986 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.783774 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.783822 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.783835 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.783855 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.783871 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:46Z","lastTransitionTime":"2026-01-29T16:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.791994 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f
4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.812115 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.840145 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.858517 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.873051 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89403a5-379d-4c3f-a87f-8d2ed63ab368\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f0bdc60ea5a5e188b2ed8894bb387084567b294c3a356fed01493d4bd6a7caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6068173079b1d5cf7de92fe34bd0a4701b11a4dba7ae384f2d0e33f185656107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6565dc4c35c618a239bdc2be6dd45a9057e83573c92c3fc816350eb014c822c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.887350 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.887390 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.887403 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.887435 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.887450 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:46Z","lastTransitionTime":"2026-01-29T16:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.892990 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.906197 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.928264 4714 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5156bbcc6abbf40a22215f2642e81fed55f603d494e294703866350469208075\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"message\\\":\\\"able:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:10:27.532704 6150 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-c9jhc\\\\nI0129 16:10:27.533048 6150 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 16:10:27.533063 6150 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
fail\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.940380 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2932c3bd-04c7-4494-8d43-03c4524a353f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78109e9e498f3d640934a2c45faf27133d32ece9e35e42ed48ddba720fa7a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc1d7b5f32fe8de4bfd5ba555838989293be538217424c86ea5cedadb8295f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tg8sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:46Z is after 2025-08-24T17:21:41Z" Jan 29 
16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.990846 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.990905 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.990928 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.990982 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:46 crc kubenswrapper[4714]: I0129 16:10:46.991002 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:46Z","lastTransitionTime":"2026-01-29T16:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.094257 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.094309 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.094325 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.094344 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.094360 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:47Z","lastTransitionTime":"2026-01-29T16:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.158290 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 21:50:29.457942339 +0000 UTC Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.183735 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.183771 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.183801 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:47 crc kubenswrapper[4714]: E0129 16:10:47.183946 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:10:47 crc kubenswrapper[4714]: E0129 16:10:47.184022 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:10:47 crc kubenswrapper[4714]: E0129 16:10:47.184258 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.197865 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.197901 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.197912 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.197948 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.197961 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:47Z","lastTransitionTime":"2026-01-29T16:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.301827 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.301882 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.301893 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.301914 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.301944 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:47Z","lastTransitionTime":"2026-01-29T16:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.405639 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.405717 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.405732 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.405756 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.405771 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:47Z","lastTransitionTime":"2026-01-29T16:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.508551 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.508610 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.508627 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.508653 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.508672 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:47Z","lastTransitionTime":"2026-01-29T16:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.550121 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sbnkt_04b20f02-6c1e-4082-8233-8f06bda63195/ovnkube-controller/2.log" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.551589 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sbnkt_04b20f02-6c1e-4082-8233-8f06bda63195/ovnkube-controller/1.log" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.557550 4714 generic.go:334] "Generic (PLEG): container finished" podID="04b20f02-6c1e-4082-8233-8f06bda63195" containerID="98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc" exitCode=1 Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.557629 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerDied","Data":"98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc"} Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.557716 4714 scope.go:117] "RemoveContainer" containerID="5156bbcc6abbf40a22215f2642e81fed55f603d494e294703866350469208075" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.558965 4714 scope.go:117] "RemoveContainer" containerID="98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc" Jan 29 16:10:47 crc kubenswrapper[4714]: E0129 16:10:47.559332 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-sbnkt_openshift-ovn-kubernetes(04b20f02-6c1e-4082-8233-8f06bda63195)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.583621 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.607132 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnk
ube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:47Z is after 2025-08-24T17:21:41Z"
Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.611561 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.611613 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.611632 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.611655 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.611671 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:47Z","lastTransitionTime":"2026-01-29T16:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.625082 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:47Z is after 2025-08-24T17:21:41Z"
Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.646352 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.679495 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.700543 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:47Z is after 2025-08-24T17:21:41Z"
Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.715218 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.715288 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.715311 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.715342 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.715364 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:47Z","lastTransitionTime":"2026-01-29T16:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.722744 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:47Z is after 2025-08-24T17:21:41Z"
Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.741460 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.760892 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89403a5-379d-4c3f-a87f-8d2ed63ab368\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f0bdc60ea5a5e188b2ed8894bb387084567b294c3a356fed01493d4bd6a7caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6068173079b1d5cf7de92fe34bd0a4701b11a4dba7ae384f2d0e33f185656107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6565dc4c35c618a239bdc2be6dd45a9057e83573c92c3fc816350eb014c822c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.782561 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.800682 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:47Z is after 2025-08-24T17:21:41Z"
Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.818228 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.818572 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.818757 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.818983 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.819174 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:47Z","lastTransitionTime":"2026-01-29T16:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.832468 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5156bbcc6abbf40a22215f2642e81fed55f603d494e294703866350469208075\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"message\\\":\\\"able:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:10:27.532704 6150 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-c9jhc\\\\nI0129 16:10:27.533048 6150 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 16:10:27.533063 6150 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
fail\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:47Z\\\",\\\"message\\\":\\\"ing reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:10:47.314985 6375 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:10:47.315336 6375 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:10:47.315637 6375 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:10:47.315679 6375 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:10:47.315724 6375 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:10:47.315752 6375 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 16:10:47.315774 6375 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 16:10:47.315793 6375 factory.go:656] Stopping watch factory\\\\nI0129 16:10:47.315813 6375 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:10:47.315849 6375 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:10:47.315864 6375 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:10:47.315882 6375 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:10:47.315889 6375 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.850837 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2932c3bd-04c7-4494-8d43-03c4524a353f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78109e9e498f3d640934a2c45faf27133d32ece9e35e42ed48ddba720fa7a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc1d7b5f32fe8de4bfd5ba555838989293be538217424c86ea5cedadb8295f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tg8sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.868104 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.889001 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.913036 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.922525 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.922576 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.922588 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.922608 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.922620 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:47Z","lastTransitionTime":"2026-01-29T16:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.932674 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:47 crc kubenswrapper[4714]: I0129 16:10:47.951173 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2w92b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"791456e8-8d95-4cdb-8fd1-d06a7586b328\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2w92b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.025421 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.025493 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.025512 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.025562 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.025581 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:48Z","lastTransitionTime":"2026-01-29T16:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.128554 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.128669 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.128694 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.128725 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.128790 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:48Z","lastTransitionTime":"2026-01-29T16:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.159129 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 07:17:41.242893521 +0000 UTC Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.183602 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:48 crc kubenswrapper[4714]: E0129 16:10:48.183798 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
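
The setters.go lines print the Ready condition the kubelet is writing. A small sketch that decodes that payload with a hand-rolled struct mirroring the fields visible in the log (rather than importing k8s.io/api), which makes the NotReady transition easy to inspect programmatically.

// condparse.go — decode the condition={...} payload printed above
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"time"
)

type nodeCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	// shortened copy of the payload from the log
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:48Z","lastTransitionTime":"2026-01-29T16:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s=%s since %s: %s\n", c.Type, c.Status,
		c.LastTransitionTime.Format(time.RFC3339), c.Reason)
}
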
pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.232354 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.232403 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.232420 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.232442 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.232461 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:48Z","lastTransitionTime":"2026-01-29T16:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.335894 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.336012 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.336037 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.336068 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.336088 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:48Z","lastTransitionTime":"2026-01-29T16:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.439142 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.439207 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.439227 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.439252 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.439269 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:48Z","lastTransitionTime":"2026-01-29T16:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.541806 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.541872 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.541897 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.541927 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.541983 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:48Z","lastTransitionTime":"2026-01-29T16:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
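
The certificate_manager line above reports a rotation deadline (2025-12-22) well before expiry (2026-02-24) and already in the past relative to the node clock, so serving-cert rotation is due immediately. A sketch of how such a jittered deadline can be derived; the 70–90% window and the one-year NotBefore are assumptions about client-go's policy, not values from this log.

// rotation.go — sketch of a jittered certificate rotation deadline
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// pick a random point 70–90% of the way through the validity window
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.Add(-365 * 24 * time.Hour) // start of validity is not in the log
	d := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", d)
	if time.Now().After(d) {
		fmt.Println("deadline already passed: rotate now")
	}
}
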
Has your network provider started?"} Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.568193 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sbnkt_04b20f02-6c1e-4082-8233-8f06bda63195/ovnkube-controller/2.log" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.572520 4714 scope.go:117] "RemoveContainer" containerID="98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc" Jan 29 16:10:48 crc kubenswrapper[4714]: E0129 16:10:48.573035 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-sbnkt_openshift-ovn-kubernetes(04b20f02-6c1e-4082-8233-8f06bda63195)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.589846 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.609235 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
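
The "back-off 20s restarting failed container" entry above is the kubelet's doubling restart delay for the crash-looping ovnkube-controller. A sketch of that schedule, using the commonly cited defaults (10s base, 5m cap) as assumptions rather than values printed in this log.

// backoff.go — sketch of the CrashLoopBackOff delay schedule
package main

import (
	"fmt"
	"time"
)

func crashLoopDelay(restarts int) time.Duration {
	d := 10 * time.Second // assumed initial container backoff
	for i := 0; i < restarts; i++ {
		d *= 2
		if d > 5*time.Minute {
			return 5 * time.Minute // assumed cap
		}
	}
	return d
}

func main() {
	for r := 0; r <= 5; r++ {
		fmt.Printf("restart %d -> wait %s\n", r, crashLoopDelay(r))
	}
}
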
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.623059 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.643566 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.645478 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.645540 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.645557 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.645582 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.645600 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:48Z","lastTransitionTime":"2026-01-29T16:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.669007 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f
4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.687814 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.709471 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.724780 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.742849 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89403a5-379d-4c3f-a87f-8d2ed63ab368\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f0bdc60ea5a5e188b2ed8894bb387084567b294c3a356fed01493d4bd6a7caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6068173079b1d5cf7de92fe34bd0a4701b11a4dba7ae384f2d0e33f185656107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6565dc4c35c618a239bdc2be6dd45a9057e83573c92c3fc816350eb014c822c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.748217 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.748269 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.748283 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.748326 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.748338 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:48Z","lastTransitionTime":"2026-01-29T16:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.762172 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.777536 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.817850 4714 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:47Z\\\",\\\"message\\\":\\\"ing reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:10:47.314985 6375 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:10:47.315336 6375 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:10:47.315637 6375 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:10:47.315679 6375 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:10:47.315724 6375 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:10:47.315752 6375 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 16:10:47.315774 6375 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 16:10:47.315793 6375 factory.go:656] Stopping watch factory\\\\nI0129 16:10:47.315813 6375 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:10:47.315849 6375 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:10:47.315864 6375 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:10:47.315882 6375 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:10:47.315889 6375 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sbnkt_openshift-ovn-kubernetes(04b20f02-6c1e-4082-8233-8f06bda63195)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.836646 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2932c3bd-04c7-4494-8d43-03c4524a353f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78109e9e498f3d640934a2c45faf27133d32ece9e35e42ed48ddba720fa7a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc1d7b5f32fe8de4bfd5ba555838989293be538217424c86ea5cedadb8295f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tg8sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.850874 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.850918 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.850949 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.850969 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.850982 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:48Z","lastTransitionTime":"2026-01-29T16:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.855895 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.868784 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.880978 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.889443 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.900797 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2w92b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"791456e8-8d95-4cdb-8fd1-d06a7586b328\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2w92b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.953951 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.953983 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.953994 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.954008 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:48 crc kubenswrapper[4714]: I0129 16:10:48.954017 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:48Z","lastTransitionTime":"2026-01-29T16:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.056811 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.056872 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.056891 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.056912 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.056956 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:49Z","lastTransitionTime":"2026-01-29T16:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.159263 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 17:12:31.630409727 +0000 UTC Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.159714 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.159756 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.159766 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.159801 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.159812 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:49Z","lastTransitionTime":"2026-01-29T16:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.183698 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.183747 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.183896 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:49 crc kubenswrapper[4714]: E0129 16:10:49.184049 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:10:49 crc kubenswrapper[4714]: E0129 16:10:49.184235 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:10:49 crc kubenswrapper[4714]: E0129 16:10:49.184404 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.262711 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.262774 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.262791 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.262817 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.262835 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:49Z","lastTransitionTime":"2026-01-29T16:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.366053 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.366123 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.366141 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.366170 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.366187 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:49Z","lastTransitionTime":"2026-01-29T16:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.469682 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.469745 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.469771 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.469802 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.469828 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:49Z","lastTransitionTime":"2026-01-29T16:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.572404 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.572579 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.572653 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.572725 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.572754 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:49Z","lastTransitionTime":"2026-01-29T16:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.676227 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.676306 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.676324 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.676355 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.676375 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:49Z","lastTransitionTime":"2026-01-29T16:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.780025 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.780133 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.780152 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.780178 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.780194 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:49Z","lastTransitionTime":"2026-01-29T16:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.882891 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.882993 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.883012 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.883038 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.883057 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:49Z","lastTransitionTime":"2026-01-29T16:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.986476 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.986564 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.986582 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.986612 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:49 crc kubenswrapper[4714]: I0129 16:10:49.986634 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:49Z","lastTransitionTime":"2026-01-29T16:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.090041 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.090136 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.090155 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.090185 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.090205 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:50Z","lastTransitionTime":"2026-01-29T16:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.160146 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 07:14:44.941441217 +0000 UTC Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.183760 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:50 crc kubenswrapper[4714]: E0129 16:10:50.184262 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.195185 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.195265 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.195285 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.195313 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.195333 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:50Z","lastTransitionTime":"2026-01-29T16:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.298077 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.298132 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.298148 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.298172 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.298187 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:50Z","lastTransitionTime":"2026-01-29T16:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.401394 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.401467 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.401488 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.401519 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.401543 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:50Z","lastTransitionTime":"2026-01-29T16:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.505064 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.505118 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.505136 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.505161 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.505179 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:50Z","lastTransitionTime":"2026-01-29T16:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.608603 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.608676 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.608698 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.608729 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.608754 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:50Z","lastTransitionTime":"2026-01-29T16:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.711825 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.711877 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.711890 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.711910 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.711922 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:50Z","lastTransitionTime":"2026-01-29T16:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.815233 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.815368 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.815392 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.815415 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.815433 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:50Z","lastTransitionTime":"2026-01-29T16:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.919163 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.919257 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.919366 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.919392 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:50 crc kubenswrapper[4714]: I0129 16:10:50.919410 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:50Z","lastTransitionTime":"2026-01-29T16:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.021909 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.022026 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.022044 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.022068 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.022085 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:51Z","lastTransitionTime":"2026-01-29T16:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.125158 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.125269 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.125295 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.125325 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.125346 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:51Z","lastTransitionTime":"2026-01-29T16:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.161058 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 19:46:19.839056109 +0000 UTC Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.184001 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.184082 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.184021 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:51 crc kubenswrapper[4714]: E0129 16:10:51.184224 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:10:51 crc kubenswrapper[4714]: E0129 16:10:51.184393 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:10:51 crc kubenswrapper[4714]: E0129 16:10:51.184564 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.229139 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.229196 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.229218 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.229243 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.229261 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:51Z","lastTransitionTime":"2026-01-29T16:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.332180 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.332248 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.332265 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.332290 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.332309 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:51Z","lastTransitionTime":"2026-01-29T16:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.435428 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.435494 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.435514 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.435545 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.435567 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:51Z","lastTransitionTime":"2026-01-29T16:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.538164 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.538211 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.538228 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.538251 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.538268 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:51Z","lastTransitionTime":"2026-01-29T16:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.604645 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.604724 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.604749 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.604778 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.604801 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:51Z","lastTransitionTime":"2026-01-29T16:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:51 crc kubenswrapper[4714]: E0129 16:10:51.621567 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:51Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.626877 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.626925 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.627001 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.627023 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.627040 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:51Z","lastTransitionTime":"2026-01-29T16:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:51 crc kubenswrapper[4714]: E0129 16:10:51.647638 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:51Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.652208 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.652238 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.652249 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.652264 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.652276 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:51Z","lastTransitionTime":"2026-01-29T16:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:51 crc kubenswrapper[4714]: E0129 16:10:51.669640 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:51Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.674189 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.674238 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.674255 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.674277 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.674294 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:51Z","lastTransitionTime":"2026-01-29T16:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:51 crc kubenswrapper[4714]: E0129 16:10:51.694611 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:51Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.699675 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.699756 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.699774 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.699810 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.699837 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:51Z","lastTransitionTime":"2026-01-29T16:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:51 crc kubenswrapper[4714]: E0129 16:10:51.719606 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:51Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:51 crc kubenswrapper[4714]: E0129 16:10:51.719831 4714 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.722063 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.722122 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.722148 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.722176 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.722197 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:51Z","lastTransitionTime":"2026-01-29T16:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.825069 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.825145 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.825167 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.825195 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.825217 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:51Z","lastTransitionTime":"2026-01-29T16:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.928237 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.928329 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.928354 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.928387 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:51 crc kubenswrapper[4714]: I0129 16:10:51.928438 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:51Z","lastTransitionTime":"2026-01-29T16:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.031910 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.032003 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.032023 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.032049 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.032069 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:52Z","lastTransitionTime":"2026-01-29T16:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.135025 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.135454 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.135627 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.135798 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.136000 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:52Z","lastTransitionTime":"2026-01-29T16:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.161994 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 04:38:12.991351199 +0000 UTC Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.183598 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:52 crc kubenswrapper[4714]: E0129 16:10:52.184188 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.239681 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.239732 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.239749 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.239773 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.239792 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:52Z","lastTransitionTime":"2026-01-29T16:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.342862 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.342989 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.343016 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.343047 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.343072 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:52Z","lastTransitionTime":"2026-01-29T16:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.445443 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.445533 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.445617 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.445736 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.445784 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:52Z","lastTransitionTime":"2026-01-29T16:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.548911 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.548999 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.549014 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.549043 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.549059 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:52Z","lastTransitionTime":"2026-01-29T16:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.652311 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.652400 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.652436 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.652470 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.652493 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:52Z","lastTransitionTime":"2026-01-29T16:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.755551 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.755638 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.755658 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.755684 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.755702 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:52Z","lastTransitionTime":"2026-01-29T16:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.858765 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.858827 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.858844 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.858868 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.858887 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:52Z","lastTransitionTime":"2026-01-29T16:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.961144 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.961202 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.961213 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.961228 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:52 crc kubenswrapper[4714]: I0129 16:10:52.961239 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:52Z","lastTransitionTime":"2026-01-29T16:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.063849 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.063921 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.063974 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.064009 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.064075 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:53Z","lastTransitionTime":"2026-01-29T16:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.163502 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 22:47:23.895518493 +0000 UTC Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.166672 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.166735 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.166755 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.166812 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.166829 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:53Z","lastTransitionTime":"2026-01-29T16:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.183299 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.183392 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.183448 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:53 crc kubenswrapper[4714]: E0129 16:10:53.183608 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:10:53 crc kubenswrapper[4714]: E0129 16:10:53.183753 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:10:53 crc kubenswrapper[4714]: E0129 16:10:53.183991 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.269650 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.269717 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.269734 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.269762 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.269785 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:53Z","lastTransitionTime":"2026-01-29T16:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.373121 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.373189 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.373206 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.373231 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.373249 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:53Z","lastTransitionTime":"2026-01-29T16:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.476400 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.476481 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.476505 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.476540 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.476562 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:53Z","lastTransitionTime":"2026-01-29T16:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.579300 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.579357 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.579372 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.579406 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.579418 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:53Z","lastTransitionTime":"2026-01-29T16:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.683311 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.683371 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.683383 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.683406 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.683421 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:53Z","lastTransitionTime":"2026-01-29T16:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.786211 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.786257 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.786269 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.786285 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.786299 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:53Z","lastTransitionTime":"2026-01-29T16:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.889034 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.889113 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.889141 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.889175 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.889200 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:53Z","lastTransitionTime":"2026-01-29T16:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.991071 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.991123 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.991132 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.991145 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:53 crc kubenswrapper[4714]: I0129 16:10:53.991154 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:53Z","lastTransitionTime":"2026-01-29T16:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.094838 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.094903 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.094912 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.094949 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.094962 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:54Z","lastTransitionTime":"2026-01-29T16:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.164503 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 15:16:39.879751955 +0000 UTC Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.183624 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:54 crc kubenswrapper[4714]: E0129 16:10:54.183771 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.197789 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.198127 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.198265 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.198398 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.198498 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:54Z","lastTransitionTime":"2026-01-29T16:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.199703 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2w92b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"791456e8-8d95-4cdb-8fd1-d06a7586b328\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2w92b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.220005 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.237211 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.255138 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.272657 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 
16:10:54.296679 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.301170 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.301231 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.301249 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.301279 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.301299 4714 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:54Z","lastTransitionTime":"2026-01-29T16:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.311072 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.323881 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.339049 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.353521 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.378773 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef
1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.395211 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.403451 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.403519 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.403534 4714 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.403571 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.403607 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:54Z","lastTransitionTime":"2026-01-29T16:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.417080 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.436662 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2932c3bd-04c7-4494-8d43-03c4524a353f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78109e9e498f3d640934a2c45faf27133d32ece9e35e42ed48ddba720fa7a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc1d7b5f32fe8de4bfd5ba555838989293be538217424c86ea5cedadb8295f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tg8sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:54Z is after 2025-08-24T17:21:41Z" Jan 29 
16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.456248 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89403a5-379d-4c3f-a87f-8d2ed63ab368\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f0bdc60ea5a5e188b2ed8894bb387084567b294c3a356fed01493d4bd6a7caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6068173079b1d5cf7de92fe34bd0a4701b11a4dba7ae384f2d0e33f185656107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6565dc4c35c618a239bdc2be6dd45a9057e83573c92c3fc816350eb014c822c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.475082 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.490503 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.505973 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.506054 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.506080 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.506124 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.506149 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:54Z","lastTransitionTime":"2026-01-29T16:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.518631 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:47Z\\\",\\\"message\\\":\\\"ing reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:10:47.314985 6375 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:10:47.315336 6375 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:10:47.315637 6375 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:10:47.315679 6375 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:10:47.315724 6375 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:10:47.315752 6375 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 16:10:47.315774 6375 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 16:10:47.315793 6375 factory.go:656] Stopping watch factory\\\\nI0129 16:10:47.315813 6375 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:10:47.315849 6375 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:10:47.315864 6375 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:10:47.315882 6375 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:10:47.315889 6375 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sbnkt_openshift-ovn-kubernetes(04b20f02-6c1e-4082-8233-8f06bda63195)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:10:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.609395 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.609454 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.609472 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.609494 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.609892 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:54Z","lastTransitionTime":"2026-01-29T16:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.712656 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.712886 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.712896 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.712911 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.712922 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:54Z","lastTransitionTime":"2026-01-29T16:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.816450 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.816820 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.817004 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.817163 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.817305 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:54Z","lastTransitionTime":"2026-01-29T16:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.920627 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.920695 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.920718 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.920745 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:54 crc kubenswrapper[4714]: I0129 16:10:54.920766 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:54Z","lastTransitionTime":"2026-01-29T16:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.024299 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.024370 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.024404 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.024425 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.024442 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:55Z","lastTransitionTime":"2026-01-29T16:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.126240 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.126278 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.126290 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.126305 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.126318 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:55Z","lastTransitionTime":"2026-01-29T16:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.164850 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 02:35:30.42743811 +0000 UTC Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.183213 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:55 crc kubenswrapper[4714]: E0129 16:10:55.183321 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.183374 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:55 crc kubenswrapper[4714]: E0129 16:10:55.183551 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.183570 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:55 crc kubenswrapper[4714]: E0129 16:10:55.183717 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.229061 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.229106 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.229122 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.229147 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.229164 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:55Z","lastTransitionTime":"2026-01-29T16:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.332310 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.332710 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.332866 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.333063 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.333238 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:55Z","lastTransitionTime":"2026-01-29T16:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.435731 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.435779 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.435790 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.435808 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.435822 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:55Z","lastTransitionTime":"2026-01-29T16:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.539473 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.539543 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.539561 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.539586 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.539604 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:55Z","lastTransitionTime":"2026-01-29T16:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.642810 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.643129 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.643209 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.643291 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.643399 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:55Z","lastTransitionTime":"2026-01-29T16:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.746227 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.746289 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.746306 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.746327 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.746345 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:55Z","lastTransitionTime":"2026-01-29T16:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.849433 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.849502 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.849527 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.849556 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.849580 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:55Z","lastTransitionTime":"2026-01-29T16:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.953329 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.953370 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.953382 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.953402 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:55 crc kubenswrapper[4714]: I0129 16:10:55.953414 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:55Z","lastTransitionTime":"2026-01-29T16:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.057638 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.057681 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.057695 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.057716 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.057729 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:56Z","lastTransitionTime":"2026-01-29T16:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.160686 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.160739 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.160757 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.160786 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.160804 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:56Z","lastTransitionTime":"2026-01-29T16:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.263266 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.263310 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.263326 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.263348 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.263364 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:56Z","lastTransitionTime":"2026-01-29T16:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.366974 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.367034 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.367053 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.367074 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.367091 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:56Z","lastTransitionTime":"2026-01-29T16:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.469825 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.469872 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.469889 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.469913 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.469959 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:56Z","lastTransitionTime":"2026-01-29T16:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.572362 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.572415 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.572432 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.572451 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.572467 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:56Z","lastTransitionTime":"2026-01-29T16:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.675038 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.675071 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.675083 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.675100 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.675112 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:56Z","lastTransitionTime":"2026-01-29T16:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.779016 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.779053 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.779062 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.779079 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.779089 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:56Z","lastTransitionTime":"2026-01-29T16:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.845475 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 08:33:53.939539328 +0000 UTC Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.846355 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.846465 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.846516 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:56 crc kubenswrapper[4714]: E0129 16:10:56.846508 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.846465 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:56 crc kubenswrapper[4714]: E0129 16:10:56.846664 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:10:56 crc kubenswrapper[4714]: E0129 16:10:56.846792 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:10:56 crc kubenswrapper[4714]: E0129 16:10:56.846903 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.882358 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.882425 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.882441 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.882468 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.882486 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:56Z","lastTransitionTime":"2026-01-29T16:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.985733 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.985815 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.985834 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.985864 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:56 crc kubenswrapper[4714]: I0129 16:10:56.985885 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:56Z","lastTransitionTime":"2026-01-29T16:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.088991 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.089048 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.089067 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.089094 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.089114 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:57Z","lastTransitionTime":"2026-01-29T16:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.192730 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.192802 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.192823 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.192848 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.192866 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:57Z","lastTransitionTime":"2026-01-29T16:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.296034 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.296098 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.296118 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.296147 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.296165 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:57Z","lastTransitionTime":"2026-01-29T16:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.399433 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.399508 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.399528 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.399553 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.399571 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:57Z","lastTransitionTime":"2026-01-29T16:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.502722 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.502778 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.502794 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.502816 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.502833 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:57Z","lastTransitionTime":"2026-01-29T16:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.604672 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.604718 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.604730 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.604749 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.604763 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:57Z","lastTransitionTime":"2026-01-29T16:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.707482 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.707553 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.707573 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.707598 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.707616 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:57Z","lastTransitionTime":"2026-01-29T16:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.811345 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.811415 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.811433 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.811459 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.811481 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:57Z","lastTransitionTime":"2026-01-29T16:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.845664 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 06:35:04.24250712 +0000 UTC Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.914798 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.914843 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.914854 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.914871 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:57 crc kubenswrapper[4714]: I0129 16:10:57.914883 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:57Z","lastTransitionTime":"2026-01-29T16:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.018175 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.018238 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.018256 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.018280 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.018298 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:58Z","lastTransitionTime":"2026-01-29T16:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.121524 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.121605 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.121624 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.121649 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.121665 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:58Z","lastTransitionTime":"2026-01-29T16:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.184108 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.184186 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.184249 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:10:58 crc kubenswrapper[4714]: E0129 16:10:58.184386 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:10:58 crc kubenswrapper[4714]: E0129 16:10:58.184567 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:10:58 crc kubenswrapper[4714]: E0129 16:10:58.184665 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.224511 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.224552 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.224565 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.224605 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.224618 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:58Z","lastTransitionTime":"2026-01-29T16:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.328255 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.328295 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.328331 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.328348 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.328361 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:58Z","lastTransitionTime":"2026-01-29T16:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.431864 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.432157 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.432215 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.432240 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.432601 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:58Z","lastTransitionTime":"2026-01-29T16:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.535553 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.535599 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.535615 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.535638 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.535655 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:58Z","lastTransitionTime":"2026-01-29T16:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.638494 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.638559 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.638577 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.638601 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.638619 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:58Z","lastTransitionTime":"2026-01-29T16:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.741649 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.741707 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.741724 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.741749 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.741767 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:58Z","lastTransitionTime":"2026-01-29T16:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.845680 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.845734 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.845753 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.845777 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.845796 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:58Z","lastTransitionTime":"2026-01-29T16:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.845917 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 14:08:11.895414663 +0000 UTC Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.949298 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.949379 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.949416 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.949438 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:58 crc kubenswrapper[4714]: I0129 16:10:58.949449 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:58Z","lastTransitionTime":"2026-01-29T16:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.053463 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.053524 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.053541 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.053572 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.053590 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:59Z","lastTransitionTime":"2026-01-29T16:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.156979 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.157030 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.157048 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.157072 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.157103 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:59Z","lastTransitionTime":"2026-01-29T16:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.183875 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:10:59 crc kubenswrapper[4714]: E0129 16:10:59.184039 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.260382 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.260440 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.260460 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.260487 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.260506 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:59Z","lastTransitionTime":"2026-01-29T16:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.363367 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.363411 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.363423 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.363441 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.363454 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:59Z","lastTransitionTime":"2026-01-29T16:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.466706 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.466777 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.466796 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.466828 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.466846 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:59Z","lastTransitionTime":"2026-01-29T16:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.569848 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.569911 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.570192 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.570235 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.570256 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:59Z","lastTransitionTime":"2026-01-29T16:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.672788 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.672830 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.672843 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.672862 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.672874 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:59Z","lastTransitionTime":"2026-01-29T16:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.776531 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.776596 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.776615 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.776640 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.776658 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:59Z","lastTransitionTime":"2026-01-29T16:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.846229 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 16:34:18.566218485 +0000 UTC Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.888530 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.888575 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.888590 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.888612 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.888627 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:59Z","lastTransitionTime":"2026-01-29T16:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.992127 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.992201 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.992221 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.992247 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:10:59 crc kubenswrapper[4714]: I0129 16:10:59.992269 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:10:59Z","lastTransitionTime":"2026-01-29T16:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.095173 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.095235 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.095252 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.095280 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.095297 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:00Z","lastTransitionTime":"2026-01-29T16:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.183369 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.183414 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:00 crc kubenswrapper[4714]: E0129 16:11:00.183544 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.183564 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:00 crc kubenswrapper[4714]: E0129 16:11:00.183660 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:00 crc kubenswrapper[4714]: E0129 16:11:00.183736 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.201112 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.201172 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.201195 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.201220 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.201237 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:00Z","lastTransitionTime":"2026-01-29T16:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.303634 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.303667 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.303675 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.303688 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.303697 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:00Z","lastTransitionTime":"2026-01-29T16:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.406414 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.406468 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.406486 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.406510 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.406525 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:00Z","lastTransitionTime":"2026-01-29T16:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.522072 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.522134 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.522155 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.522180 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.522198 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:00Z","lastTransitionTime":"2026-01-29T16:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.624284 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.624323 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.624333 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.624348 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.624357 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:00Z","lastTransitionTime":"2026-01-29T16:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.726878 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.726969 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.726992 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.727023 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.727048 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:00Z","lastTransitionTime":"2026-01-29T16:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.830247 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.830301 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.830316 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.830340 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.830353 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:00Z","lastTransitionTime":"2026-01-29T16:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.846719 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 07:15:56.643904055 +0000 UTC Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.932903 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.932953 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.932965 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.932981 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:00 crc kubenswrapper[4714]: I0129 16:11:00.932992 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:00Z","lastTransitionTime":"2026-01-29T16:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.035895 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.035980 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.036000 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.036025 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.036042 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:01Z","lastTransitionTime":"2026-01-29T16:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.139651 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.139724 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.139751 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.139784 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.139806 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:01Z","lastTransitionTime":"2026-01-29T16:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.183320 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:01 crc kubenswrapper[4714]: E0129 16:11:01.183699 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.183892 4714 scope.go:117] "RemoveContainer" containerID="98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc" Jan 29 16:11:01 crc kubenswrapper[4714]: E0129 16:11:01.184052 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-sbnkt_openshift-ovn-kubernetes(04b20f02-6c1e-4082-8233-8f06bda63195)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.242110 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.242142 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.242150 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.242164 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.242173 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:01Z","lastTransitionTime":"2026-01-29T16:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.345556 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.345600 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.345608 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.345625 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.345634 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:01Z","lastTransitionTime":"2026-01-29T16:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.447781 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.447831 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.447843 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.447864 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.447877 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:01Z","lastTransitionTime":"2026-01-29T16:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.550495 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.550553 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.550565 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.550584 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.550602 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:01Z","lastTransitionTime":"2026-01-29T16:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.654021 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.654064 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.654077 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.654094 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.654105 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:01Z","lastTransitionTime":"2026-01-29T16:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.756209 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.756247 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.756272 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.756287 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.756297 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:01Z","lastTransitionTime":"2026-01-29T16:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.847573 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 19:17:37.83068583 +0000 UTC Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.858339 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.858410 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.858418 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.858435 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.858444 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:01Z","lastTransitionTime":"2026-01-29T16:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.961636 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.961693 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.961709 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.961731 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.961744 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:01Z","lastTransitionTime":"2026-01-29T16:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.963024 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.963064 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.963081 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.963098 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.963112 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:01Z","lastTransitionTime":"2026-01-29T16:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:01 crc kubenswrapper[4714]: E0129 16:11:01.977630 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.980592 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.980628 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.980637 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.980654 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:01 crc kubenswrapper[4714]: I0129 16:11:01.980667 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:01Z","lastTransitionTime":"2026-01-29T16:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:01 crc kubenswrapper[4714]: E0129 16:11:01.997581 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.001971 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.002032 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.002049 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.002105 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.002121 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:02Z","lastTransitionTime":"2026-01-29T16:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.008646 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs\") pod \"network-metrics-daemon-2w92b\" (UID: \"791456e8-8d95-4cdb-8fd1-d06a7586b328\") " pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:02 crc kubenswrapper[4714]: E0129 16:11:02.008794 4714 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:11:02 crc kubenswrapper[4714]: E0129 16:11:02.008856 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs podName:791456e8-8d95-4cdb-8fd1-d06a7586b328 nodeName:}" failed. No retries permitted until 2026-01-29 16:11:34.008840376 +0000 UTC m=+100.529341506 (durationBeforeRetry 32s). 
Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.022203 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.022252 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
event="NodeHasNoDiskPressure" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.022265 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.022284 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.022295 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:02Z","lastTransitionTime":"2026-01-29T16:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:02 crc kubenswrapper[4714]: E0129 16:11:02.036723 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.039804 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.039841 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.039853 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.039875 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.039887 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:02Z","lastTransitionTime":"2026-01-29T16:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:02 crc kubenswrapper[4714]: E0129 16:11:02.054736 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:02 crc kubenswrapper[4714]: E0129 16:11:02.054916 4714 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.064041 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.064085 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.064095 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.064115 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.064132 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:02Z","lastTransitionTime":"2026-01-29T16:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.168405 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.168463 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.168482 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.168503 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.168522 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:02Z","lastTransitionTime":"2026-01-29T16:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.183989 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.184031 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.183999 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:02 crc kubenswrapper[4714]: E0129 16:11:02.184142 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:02 crc kubenswrapper[4714]: E0129 16:11:02.184356 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:02 crc kubenswrapper[4714]: E0129 16:11:02.184421 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.271244 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.271330 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.271348 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.271367 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.271383 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:02Z","lastTransitionTime":"2026-01-29T16:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.373297 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.373335 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.373352 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.373371 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.373387 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:02Z","lastTransitionTime":"2026-01-29T16:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.475905 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.475988 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.476002 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.476021 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.476036 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:02Z","lastTransitionTime":"2026-01-29T16:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.579949 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.580002 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.580015 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.580035 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.580046 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:02Z","lastTransitionTime":"2026-01-29T16:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.683697 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.683745 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.683756 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.683781 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.683795 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:02Z","lastTransitionTime":"2026-01-29T16:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.787644 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.787696 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.787715 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.787740 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.787760 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:02Z","lastTransitionTime":"2026-01-29T16:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.847982 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 04:32:47.95244453 +0000 UTC Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.890295 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.890348 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.890365 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.890388 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.890407 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:02Z","lastTransitionTime":"2026-01-29T16:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.993022 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.993059 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.993069 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.993083 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:02 crc kubenswrapper[4714]: I0129 16:11:02.993092 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:02Z","lastTransitionTime":"2026-01-29T16:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.095545 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.095592 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.095606 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.095623 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.095676 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:03Z","lastTransitionTime":"2026-01-29T16:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.183660 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:03 crc kubenswrapper[4714]: E0129 16:11:03.183793 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
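
The recurring Ready=False condition is mechanical: the container runtime reports NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/, and on this node that has evidently not happened yet on this boot, so every pod that needs a new sandbox is skipped. A short Go sketch of the directory check the message implies; the accepted extensions (.conf, .conflist, .json) follow the common libcni convention and are an assumption here, not something this log states.

// cnicheck.go - a sketch of what "no CNI configuration file" means:
// the runtime found nothing usable in the configured conf dir.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // the directory named in the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read conf dir:", err)
		return
	}
	found := 0
	for _, e := range entries {
		// Assumed extension set, per the usual libcni convention.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("candidate CNI config:", e.Name())
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration file in", dir, "- node stays NotReady")
	}
}
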
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.198820 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.198853 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.198863 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.198876 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.198885 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:03Z","lastTransitionTime":"2026-01-29T16:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.301869 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.301909 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.301922 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.301968 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.301983 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:03Z","lastTransitionTime":"2026-01-29T16:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.405161 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.405198 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.405208 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.405223 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.405235 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:03Z","lastTransitionTime":"2026-01-29T16:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.508391 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.508448 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.508461 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.508484 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.508503 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:03Z","lastTransitionTime":"2026-01-29T16:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.611260 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.611303 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.611315 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.611333 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.611345 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:03Z","lastTransitionTime":"2026-01-29T16:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.714714 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.714811 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.714832 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.714869 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.714890 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:03Z","lastTransitionTime":"2026-01-29T16:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.818284 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.818383 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.818569 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.818594 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.818608 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:03Z","lastTransitionTime":"2026-01-29T16:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.848870 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 01:27:13.90868659 +0000 UTC Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.922531 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.922578 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.922592 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.922608 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:03 crc kubenswrapper[4714]: I0129 16:11:03.922620 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:03Z","lastTransitionTime":"2026-01-29T16:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.025607 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.025651 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.025663 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.025682 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.025695 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:04Z","lastTransitionTime":"2026-01-29T16:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.129404 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.129462 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.129478 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.129505 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.129522 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:04Z","lastTransitionTime":"2026-01-29T16:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.183587 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:04 crc kubenswrapper[4714]: E0129 16:11:04.183733 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.183614 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:04 crc kubenswrapper[4714]: E0129 16:11:04.183808 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.183595 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:04 crc kubenswrapper[4714]: E0129 16:11:04.184097 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
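
Between the NotReady records, certificate_manager.go prints two different rotation deadlines (2026-01-11 and 2025-12-10) for the same 2026-02-24 expiry. That is expected behavior: client-go's certificate manager re-picks a jittered rotation point, roughly 70-90% of the way through the certificate's lifetime, each time the rotation loop runs, which is why consecutive lines disagree. A sketch of that computation follows; the one-year lifetime is an assumed value for illustration, since the log only shows the expiry date.

// rotationdeadline.go - a sketch of how the differing rotation deadlines in
// the certificate_manager.go lines can arise from jitter over one lifetime.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// Random fraction in [0.7, 0.9) of the total lifetime, per the
	// approximate client-go behavior described above.
	jittered := time.Duration((0.7 + 0.2*rand.Float64()) * float64(total))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse("2006-01-02 15:04:05", "2026-02-24 05:53:03")
	notBefore := notAfter.Add(-365 * 24 * time.Hour) // assumed one-year lifetime
	for i := 0; i < 2; i++ {
		// Two runs of the loop yield two different deadlines.
		fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter).UTC())
	}
}
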
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.201438 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.218332 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.232136 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.232214 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:04 crc 
kubenswrapper[4714]: I0129 16:11:04.232248 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.232273 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.232287 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:04Z","lastTransitionTime":"2026-01-29T16:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.232685 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 
29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.248360 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2w92b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"791456e8-8d95-4cdb-8fd1-d06a7586b328\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2w92b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.269344 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.284746 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.301125 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.313741 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.330101 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.335139 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.335193 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.335204 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.335227 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.335239 4714 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:04Z","lastTransitionTime":"2026-01-29T16:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.364142 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.385351 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.406332 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.423619 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.438339 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.438397 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.438414 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.438438 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.438454 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:04Z","lastTransitionTime":"2026-01-29T16:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.442091 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.459347 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.479344 4714 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:47Z\\\",\\\"message\\\":\\\"ing reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:10:47.314985 6375 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:10:47.315336 6375 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:10:47.315637 6375 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:10:47.315679 6375 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:10:47.315724 6375 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:10:47.315752 6375 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 16:10:47.315774 6375 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 16:10:47.315793 6375 factory.go:656] Stopping watch factory\\\\nI0129 16:10:47.315813 6375 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:10:47.315849 6375 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:10:47.315864 6375 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:10:47.315882 6375 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:10:47.315889 6375 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sbnkt_openshift-ovn-kubernetes(04b20f02-6c1e-4082-8233-8f06bda63195)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.492557 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2932c3bd-04c7-4494-8d43-03c4524a353f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78109e9e498f3d640934a2c45faf27133d32ece9e35e42ed48ddba720fa7a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc1d7b5f32fe8de4bfd5ba555838989293be538217424c86ea5cedadb8295f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tg8sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.506316 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89403a5-379d-4c3f-a87f-8d2ed63ab368\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f0bdc60ea5a5e188b2ed8894bb387084567b294c3a356fed01493d4bd6a7caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6068173079b1d5cf7de92fe34bd0a4701b11a4dba7ae384f2d0e33f185656107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6565dc4c35c618a239bdc2be6dd45a9057e83573c92c3fc816350eb014c822c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.540919 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.540986 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.540996 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.541017 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.541031 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:04Z","lastTransitionTime":"2026-01-29T16:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.629695 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b2ttm_89560008-8bdc-4640-af11-681d825e69d4/kube-multus/0.log" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.629777 4714 generic.go:334] "Generic (PLEG): container finished" podID="89560008-8bdc-4640-af11-681d825e69d4" containerID="a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a" exitCode=1 Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.629836 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b2ttm" event={"ID":"89560008-8bdc-4640-af11-681d825e69d4","Type":"ContainerDied","Data":"a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a"} Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.630523 4714 scope.go:117] "RemoveContainer" containerID="a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.642747 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.642790 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.642802 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.642818 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.642828 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:04Z","lastTransitionTime":"2026-01-29T16:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.647137 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.664666 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.679569 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.691116 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:11:03Z\\\",\\\"message\\\":\\\"2026-01-29T16:10:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_27c11876-c5ae-4d35-b720-78ab09bfac0c\\\\n2026-01-29T16:10:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_27c11876-c5ae-4d35-b720-78ab09bfac0c to /host/opt/cni/bin/\\\\n2026-01-29T16:10:18Z [verbose] multus-daemon started\\\\n2026-01-29T16:10:18Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:11:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.717023 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef
1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.729919 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.743428 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.745506 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.745558 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.745572 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.745590 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.745602 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:04Z","lastTransitionTime":"2026-01-29T16:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.753701 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.764952 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89403a5-379d-4c3f-a87f-8d2ed63ab368\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f0bdc60ea5a5e188b2ed8894bb387084567b294c3a356fed01493d4bd6a7caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6068173079b1d5cf7de92fe34bd0a4701b11a4dba7ae384f2d0e33f185656107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6565dc4c35c618a239bdc2be6dd45a9057e83573c92c3fc816350eb014c822c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.776318 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.786862 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.809149 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d5d7851e659633c2e0da4975fa0e9e36d8d1f1
b9d725661500f818f6268bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:47Z\\\",\\\"message\\\":\\\"ing reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:10:47.314985 6375 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:10:47.315336 6375 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:10:47.315637 6375 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:10:47.315679 6375 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:10:47.315724 6375 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:10:47.315752 6375 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 16:10:47.315774 6375 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 16:10:47.315793 6375 factory.go:656] Stopping watch factory\\\\nI0129 16:10:47.315813 6375 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:10:47.315849 6375 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:10:47.315864 6375 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:10:47.315882 6375 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:10:47.315889 6375 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sbnkt_openshift-ovn-kubernetes(04b20f02-6c1e-4082-8233-8f06bda63195)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.820639 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2932c3bd-04c7-4494-8d43-03c4524a353f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78109e9e498f3d640934a2c45faf27133d32ece9e35e42ed48ddba720fa7a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc1d7b5f32fe8de4bfd5ba555838989293be538217424c86ea5cedadb8295f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tg8sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.833138 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.847040 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.848447 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.848497 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.848513 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.848531 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.848544 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:04Z","lastTransitionTime":"2026-01-29T16:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.849046 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 08:13:09.765681762 +0000 UTC Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.863585 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secre
ts/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exit
Code\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.876481 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.889693 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2w92b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"791456e8-8d95-4cdb-8fd1-d06a7586b328\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2w92b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.951264 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.951309 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.951319 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.951343 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:04 crc kubenswrapper[4714]: I0129 16:11:04.951354 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:04Z","lastTransitionTime":"2026-01-29T16:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.054820 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.054873 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.054886 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.054907 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.054923 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:05Z","lastTransitionTime":"2026-01-29T16:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.157989 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.158035 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.158050 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.158071 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.158085 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:05Z","lastTransitionTime":"2026-01-29T16:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.183466 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:11:05 crc kubenswrapper[4714]: E0129 16:11:05.183630 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.261791 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.261857 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.261876 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.261900 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.261917 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:05Z","lastTransitionTime":"2026-01-29T16:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.365254 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.365329 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.365341 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.365365 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.365378 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:05Z","lastTransitionTime":"2026-01-29T16:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.468189 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.468272 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.468292 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.468318 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.468331 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:05Z","lastTransitionTime":"2026-01-29T16:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.571900 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.571953 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.571965 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.571979 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.571987 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:05Z","lastTransitionTime":"2026-01-29T16:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.636504 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b2ttm_89560008-8bdc-4640-af11-681d825e69d4/kube-multus/0.log"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.636571 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b2ttm" event={"ID":"89560008-8bdc-4640-af11-681d825e69d4","Type":"ContainerStarted","Data":"c747ed61a18e27d63630395860ce896242426b1ce46ea5f9d00534b808804a58"}
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.658923 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:05Z is after 2025-08-24T17:21:41Z"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.675515 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.675566 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.675582 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.675603 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:11:05 crc
kubenswrapper[4714]: I0129 16:11:05.675619 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:05Z","lastTransitionTime":"2026-01-29T16:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.685482 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0
fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.701056 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.719835 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.735190 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2932c3bd-04c7-4494-8d43-03c4524a353f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78109e9e498f3d640934a2c45faf27133d32ece9e35e42ed48ddba720fa7a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc1d7b5f32fe8de4bfd5ba555838989293be538217424c86ea5cedadb8295f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:29Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tg8sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.751766 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89403a5-379d-4c3f-a87f-8d2ed63ab368\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f0bdc60ea5a5e188b2ed8894bb387084567b294c3a356fed01493d4bd6a7caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6068173079b1d5cf7de92fe34bd0a4701b11a4dba7ae384f2d0e33f185656107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6565dc4c35c618a239bdc2be6dd45a9057e83573c92c3fc816350eb014c822c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-contr
oller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.766982 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:05Z is after 2025-08-24T17:21:41Z"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.778713 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.778764 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.778779 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.778798 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.778812 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:05Z","lastTransitionTime":"2026-01-29T16:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.782683 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.805899 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:47Z\\\",\\\"message\\\":\\\"ing reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:10:47.314985 6375 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:10:47.315336 6375 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:10:47.315637 6375 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:10:47.315679 6375 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:10:47.315724 6375 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:10:47.315752 6375 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 16:10:47.315774 6375 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 16:10:47.315793 6375 factory.go:656] Stopping watch factory\\\\nI0129 16:10:47.315813 6375 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:10:47.315849 6375 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:10:47.315864 6375 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:10:47.315882 6375 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:10:47.315889 6375 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-sbnkt_openshift-ovn-kubernetes(04b20f02-6c1e-4082-8233-8f06bda63195)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.821229 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2w92b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"791456e8-8d95-4cdb-8fd1-d06a7586b328\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2w92b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.834913 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.847030 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.849285 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 06:37:59.96683529 +0000 UTC Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.863585 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23
Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.877113 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.881427 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.881478 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.881489 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.881504 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.881513 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:05Z","lastTransitionTime":"2026-01-29T16:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.890687 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753
fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.906648 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.924037 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.941414 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c747ed61a18e27d63630395860ce896242426b1ce46ea5f9d00534b808804a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:11:03Z\\\",\\\"message\\\":\\\"2026-01-29T16:10:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_27c11876-c5ae-4d35-b720-78ab09bfac0c\\\\n2026-01-29T16:10:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_27c11876-c5ae-4d35-b720-78ab09bfac0c to /host/opt/cni/bin/\\\\n2026-01-29T16:10:18Z [verbose] multus-daemon started\\\\n2026-01-29T16:10:18Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:11:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:05Z is after 2025-08-24T17:21:41Z"
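The kube-multus restart recorded above follows directly from the entry's own termination message: the multus daemon polls for the default network's readiness indicator file (/host/run/multus/cni/net.d/10-ovn-kubernetes.conf) and exits when the poll times out. A minimal Go sketch of that style of wait loop; the path, interval, and timeout here are illustrative, not multus's actual defaults:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists or timeout elapses, echoing the
// readiness-indicator wait whose failure is logged above.
func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // indicator file present: default network is ready
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf",
		time.Second, 45*time.Second)
	if err != nil {
		fmt.Println("readiness indicator check failed:", err)
	}
}

Once ovn-kubernetes writes that file, the same poll succeeds and the daemon comes up without a restart.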
Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.985077 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.985142 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.985159 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.985180 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:05 crc kubenswrapper[4714]: I0129 16:11:05.985198 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:05Z","lastTransitionTime":"2026-01-29T16:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.087830 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.087876 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.087888 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.087903 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.087915 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:06Z","lastTransitionTime":"2026-01-29T16:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.184063 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.184078 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.184101 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:06 crc kubenswrapper[4714]: E0129 16:11:06.184254 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:06 crc kubenswrapper[4714]: E0129 16:11:06.184334 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:06 crc kubenswrapper[4714]: E0129 16:11:06.184498 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
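The NetworkReady=false repetition through the rest of this section is mechanical: the container runtime reports NetworkPluginNotReady until it finds a CNI network configuration under /etc/kubernetes/cni/net.d/, and the kubelet republishes the NotReady condition on every status sync. Loosely, the readiness check reduces to scanning that directory for a .conf/.conflist/.json file; a rough Go sketch under that assumption (not the actual CRI-O/ocicni implementation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir holds at least one CNI network config
// file, a loose stand-in for the check behind the NetworkReady=false
// entries in this log.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	if !ok {
		fmt.Println("no CNI configuration file found; has your network provider started?")
		return
	}
	fmt.Println("CNI configuration present")
}

Once the network provider (here, ovn-kubernetes) writes its config into that directory, the same scan succeeds and the node's Ready condition flips back.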
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.189797 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.189829 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.189838 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.189849 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.189859 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:06Z","lastTransitionTime":"2026-01-29T16:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.292088 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.292157 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.292170 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.292192 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.292204 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:06Z","lastTransitionTime":"2026-01-29T16:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.394764 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.394821 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.394841 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.394867 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.394884 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:06Z","lastTransitionTime":"2026-01-29T16:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.498081 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.498138 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.498150 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.498171 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.498185 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:06Z","lastTransitionTime":"2026-01-29T16:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.601802 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.601867 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.601886 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.601913 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.601958 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:06Z","lastTransitionTime":"2026-01-29T16:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.705395 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.705464 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.705475 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.705498 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.705512 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:06Z","lastTransitionTime":"2026-01-29T16:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.808776 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.808820 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.808830 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.808847 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.808859 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:06Z","lastTransitionTime":"2026-01-29T16:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.850368 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 12:30:46.501275408 +0000 UTC
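The certificate_manager entry above is the kubelet-serving rotation bookkeeping: the serving certificate is valid until 2026-02-24, and the jittered rotation deadline (re-randomized on each pass: 2026-01-14 at 16:11:05, 2026-01-06 here) already lies in the past, so rotation is due. The webhook failures throughout this section are the same arithmetic one layer down, with a NotAfter of 2025-08-24T17:21:41Z that precedes the current time. A small Go sketch of both checks, assuming a PEM certificate at an illustrative path and the roughly 70-90%-of-lifetime jitter that client-go's certificate manager is understood to use:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"math/rand"
	"os"
	"time"
)

func main() {
	// Illustrative path; the failing webhook's serving cert lives elsewhere.
	data, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	if now.After(cert.NotAfter) {
		// The condition behind "x509: certificate has expired or is not
		// yet valid: current time ... is after ..." seen throughout above.
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
	// Rotation deadline in the style of the certificate_manager lines:
	// a random point in roughly the last 10-30% of the cert's lifetime,
	// recomputed on each attempt (hence the shifting deadlines logged).
	lifetime := cert.NotAfter.Sub(cert.NotBefore)
	deadline := cert.NotBefore.Add(time.Duration(float64(lifetime) * (0.7 + 0.3*rand.Float64())))
	fmt.Println("rotation deadline:", deadline)
}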
Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.911847 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.911918 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.911950 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.911970 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:06 crc kubenswrapper[4714]: I0129 16:11:06.911985 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:06Z","lastTransitionTime":"2026-01-29T16:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.014885 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.014953 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.014965 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.014985 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.014999 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:07Z","lastTransitionTime":"2026-01-29T16:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.117724 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.117806 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.117829 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.117860 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.117883 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:07Z","lastTransitionTime":"2026-01-29T16:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.183415 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:07 crc kubenswrapper[4714]: E0129 16:11:07.183604 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.196387 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.220315 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.220342 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.220352 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.220366 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.220375 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:07Z","lastTransitionTime":"2026-01-29T16:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.322343 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.322571 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.322648 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.322723 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.322784 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:07Z","lastTransitionTime":"2026-01-29T16:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.425284 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.425336 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.425348 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.425365 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.425378 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:07Z","lastTransitionTime":"2026-01-29T16:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.527263 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.527314 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.527324 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.527337 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.527347 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:07Z","lastTransitionTime":"2026-01-29T16:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.630553 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.630620 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.630637 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.630662 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.630682 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:07Z","lastTransitionTime":"2026-01-29T16:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.734363 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.734424 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.734439 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.734462 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.734479 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:07Z","lastTransitionTime":"2026-01-29T16:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.837422 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.837462 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.837471 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.837485 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.837496 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:07Z","lastTransitionTime":"2026-01-29T16:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.850665 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 06:03:55.323231324 +0000 UTC Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.941490 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.941535 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.941546 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.941560 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:07 crc kubenswrapper[4714]: I0129 16:11:07.941571 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:07Z","lastTransitionTime":"2026-01-29T16:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.043880 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.043921 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.043956 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.043972 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.043982 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:08Z","lastTransitionTime":"2026-01-29T16:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.148153 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.148200 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.148214 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.148240 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.148250 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:08Z","lastTransitionTime":"2026-01-29T16:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.183328 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.183380 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:08 crc kubenswrapper[4714]: E0129 16:11:08.183475 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.183325 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:08 crc kubenswrapper[4714]: E0129 16:11:08.183596 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:08 crc kubenswrapper[4714]: E0129 16:11:08.183729 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.251243 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.251337 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.251350 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.251373 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.251389 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:08Z","lastTransitionTime":"2026-01-29T16:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.355275 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.355337 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.355357 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.355382 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.355401 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:08Z","lastTransitionTime":"2026-01-29T16:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.459296 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.459372 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.459427 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.459471 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.459499 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:08Z","lastTransitionTime":"2026-01-29T16:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.563713 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.563777 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.563792 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.563818 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.563861 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:08Z","lastTransitionTime":"2026-01-29T16:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.667693 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.667749 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.667761 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.667781 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.667795 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:08Z","lastTransitionTime":"2026-01-29T16:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.770464 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.770518 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.770531 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.770552 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.770563 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:08Z","lastTransitionTime":"2026-01-29T16:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.850741 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 14:27:48.783025477 +0000 UTC Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.873497 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.873590 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.873609 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.873636 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.873654 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:08Z","lastTransitionTime":"2026-01-29T16:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.977084 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.977157 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.977176 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.977204 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:08 crc kubenswrapper[4714]: I0129 16:11:08.977228 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:08Z","lastTransitionTime":"2026-01-29T16:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.080810 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.080853 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.080864 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.080880 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.080890 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:09Z","lastTransitionTime":"2026-01-29T16:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.183072 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.183109 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.183119 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.183134 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.183145 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:09Z","lastTransitionTime":"2026-01-29T16:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.183196 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:09 crc kubenswrapper[4714]: E0129 16:11:09.183396 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.285420 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.285480 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.285497 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.285520 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.285725 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:09Z","lastTransitionTime":"2026-01-29T16:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.392699 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.392786 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.392799 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.392823 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.392839 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:09Z","lastTransitionTime":"2026-01-29T16:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.496339 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.496656 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.496919 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.497011 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.497100 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:09Z","lastTransitionTime":"2026-01-29T16:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.600186 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.600234 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.600280 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.600303 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.600321 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:09Z","lastTransitionTime":"2026-01-29T16:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.703226 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.703262 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.703271 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.703288 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.703298 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:09Z","lastTransitionTime":"2026-01-29T16:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.807511 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.807551 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.807560 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.807575 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.807584 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:09Z","lastTransitionTime":"2026-01-29T16:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.851348 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 09:24:39.95754176 +0000 UTC Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.910875 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.910917 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.910967 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.910983 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:09 crc kubenswrapper[4714]: I0129 16:11:09.910994 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:09Z","lastTransitionTime":"2026-01-29T16:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.013733 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.013829 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.013849 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.013883 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.013903 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:10Z","lastTransitionTime":"2026-01-29T16:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.117320 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.117439 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.117450 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.117472 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.117484 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:10Z","lastTransitionTime":"2026-01-29T16:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.184180 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.184211 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.184301 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:10 crc kubenswrapper[4714]: E0129 16:11:10.184427 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:10 crc kubenswrapper[4714]: E0129 16:11:10.184582 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:10 crc kubenswrapper[4714]: E0129 16:11:10.184659 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.219979 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.220041 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.220062 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.220090 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.220123 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:10Z","lastTransitionTime":"2026-01-29T16:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.323691 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.323746 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.323758 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.323777 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.323791 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:10Z","lastTransitionTime":"2026-01-29T16:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.425862 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.425920 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.425966 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.425991 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.426008 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:10Z","lastTransitionTime":"2026-01-29T16:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.528739 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.528807 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.528833 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.528865 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.528890 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:10Z","lastTransitionTime":"2026-01-29T16:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.632215 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.632298 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.632323 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.632354 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.632379 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:10Z","lastTransitionTime":"2026-01-29T16:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.736278 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.736324 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.736339 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.736357 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.736366 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:10Z","lastTransitionTime":"2026-01-29T16:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.839575 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.839637 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.839656 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.839680 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.839698 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:10Z","lastTransitionTime":"2026-01-29T16:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.851509 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 00:10:20.282900018 +0000 UTC Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.942819 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.942863 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.942877 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.942893 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:10 crc kubenswrapper[4714]: I0129 16:11:10.942905 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:10Z","lastTransitionTime":"2026-01-29T16:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.045706 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.045742 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.045751 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.045766 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.045776 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:11Z","lastTransitionTime":"2026-01-29T16:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.149165 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.149247 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.149269 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.149299 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.149321 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:11Z","lastTransitionTime":"2026-01-29T16:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.183463 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:11 crc kubenswrapper[4714]: E0129 16:11:11.183661 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.252548 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.252672 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.252691 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.252720 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.252738 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:11Z","lastTransitionTime":"2026-01-29T16:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.355701 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.355751 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.355766 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.355786 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.355801 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:11Z","lastTransitionTime":"2026-01-29T16:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.458891 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.458963 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.458977 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.458996 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.459011 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:11Z","lastTransitionTime":"2026-01-29T16:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.561663 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.561704 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.561713 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.561728 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.561737 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:11Z","lastTransitionTime":"2026-01-29T16:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.664896 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.664965 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.664978 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.664994 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.665005 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:11Z","lastTransitionTime":"2026-01-29T16:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.767936 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.768010 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.768022 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.768041 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.768053 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:11Z","lastTransitionTime":"2026-01-29T16:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.852179 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 17:20:32.040320565 +0000 UTC
Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.870911 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.871058 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.871092 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.871123 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.871147 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:11Z","lastTransitionTime":"2026-01-29T16:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.974275 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.974358 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.974384 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.974414 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:11:11 crc kubenswrapper[4714]: I0129 16:11:11.974437 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:11Z","lastTransitionTime":"2026-01-29T16:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.078171 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.078253 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.078274 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.078302 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.078323 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:12Z","lastTransitionTime":"2026-01-29T16:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.180723 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.180770 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.180780 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.180794 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.180805 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:12Z","lastTransitionTime":"2026-01-29T16:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.183981 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.184063 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:11:12 crc kubenswrapper[4714]: E0129 16:11:12.184780 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.184803 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b"
Jan 29 16:11:12 crc kubenswrapper[4714]: E0129 16:11:12.185084 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:11:12 crc kubenswrapper[4714]: E0129 16:11:12.185205 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.283048 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.283433 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.283643 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.283876 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.284141 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:12Z","lastTransitionTime":"2026-01-29T16:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.387318 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.387356 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.387366 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.387382 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.387392 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:12Z","lastTransitionTime":"2026-01-29T16:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
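Every pod in the three "Error syncing pod, skipping" entries above is blocked by the same condition: the node's network plugin is not ready because /etc/kubernetes/cni/net.d/ contains no CNI configuration. A minimal Go sketch of an equivalent readiness probe follows; it assumes only the directory named in the message and the extensions libcni conventionally accepts (.conf, .conflist, .json). The kubelet itself learns readiness through the CRI runtime status rather than by reading the directory, so this only mimics the observable behaviour:

```go
// cnicheck.go - standalone approximation of the "no CNI configuration
// file in ..." readiness condition logged above. Not kubelet code.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log message
	var found []string
	// libcni-style candidate extensions; an assumption for this sketch.
	for _, ext := range []string{".conf", ".conflist", ".json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, "*"+ext))
		if err != nil {
			fmt.Fprintln(os.Stderr, "glob:", err)
			os.Exit(1)
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		// Mirrors the condition the kubelet keeps reporting while the
		// network operator has not yet written a configuration file.
		fmt.Printf("network not ready: no CNI configuration file in %s\n", confDir)
		return
	}
	fmt.Println("CNI config candidates:", found)
}
```

Writing that configuration is the network operator's job; the check above only confirms what the kubelet keeps reporting until the operator does.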
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.455004 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.455429 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.455637 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.455864 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.456122 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:12Z","lastTransitionTime":"2026-01-29T16:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:11:12 crc kubenswrapper[4714]: E0129 16:11:12.470031 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:12Z is after 2025-08-24T17:21:41Z"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.475083 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.475339 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
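The patch above never reaches the Node object: the API server must first consult the node.network-node-identity.openshift.io admission webhook on 127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24 while the node clock reads 2026-01-29. A small Go sketch, assuming it is run on the node while the webhook is listening, that reads the certificate the same endpoint presents; InsecureSkipVerify is deliberate so an already-expired certificate can still be inspected:

```go
// certpeek.go - dial the webhook endpoint named in the log and print the
// validity window of whatever certificate it serves.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Address taken from the failing Post in the log line above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // read the cert even though it is expired
	})
	if err != nil {
		log.Fatalf("dial webhook endpoint: %v", err)
	}
	defer conn.Close()

	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notBefore=%s notAfter=%s expiredNow=%v\n",
			cert.Subject.CommonName,
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			time.Now().After(cert.NotAfter))
	}
}
```

If the diagnosis matches the log, notAfter should print as 2025-08-24T17:21:41Z, the same instant quoted in the x509 error.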
event="NodeHasNoDiskPressure" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.475550 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.475765 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.476002 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:12Z","lastTransitionTime":"2026-01-29T16:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:12 crc kubenswrapper[4714]: E0129 16:11:12.498082 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.503353 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.503409 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.503421 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.503441 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.503454 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:12Z","lastTransitionTime":"2026-01-29T16:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:12 crc kubenswrapper[4714]: E0129 16:11:12.523227 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.527736 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.527784 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.527795 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.527814 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.527826 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:12Z","lastTransitionTime":"2026-01-29T16:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:12 crc kubenswrapper[4714]: E0129 16:11:12.544892 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.549100 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.549273 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.549379 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.549483 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.549582 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:12Z","lastTransitionTime":"2026-01-29T16:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:12 crc kubenswrapper[4714]: E0129 16:11:12.567470 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:12 crc kubenswrapper[4714]: E0129 16:11:12.567628 4714 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.569468 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.569504 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.569516 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.569532 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.569547 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:12Z","lastTransitionTime":"2026-01-29T16:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.671641 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.671700 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.671713 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.671732 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.671746 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:12Z","lastTransitionTime":"2026-01-29T16:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.774501 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.774538 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.774547 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.774561 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.774570 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:12Z","lastTransitionTime":"2026-01-29T16:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.853385 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 05:34:25.726392431 +0000 UTC Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.877222 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.877258 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.877269 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.877283 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.877292 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:12Z","lastTransitionTime":"2026-01-29T16:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.980799 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.980860 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.980878 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.980903 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:12 crc kubenswrapper[4714]: I0129 16:11:12.980920 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:12Z","lastTransitionTime":"2026-01-29T16:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.084024 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.084068 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.084079 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.084095 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.084107 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:13Z","lastTransitionTime":"2026-01-29T16:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.184109 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:13 crc kubenswrapper[4714]: E0129 16:11:13.184283 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.186851 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.186885 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.186895 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.186910 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.186925 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:13Z","lastTransitionTime":"2026-01-29T16:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.290231 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.290302 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.290319 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.290346 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.290366 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:13Z","lastTransitionTime":"2026-01-29T16:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.392792 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.392837 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.392845 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.392860 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.392869 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:13Z","lastTransitionTime":"2026-01-29T16:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.496676 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.496733 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.496747 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.496767 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.496784 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:13Z","lastTransitionTime":"2026-01-29T16:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.601581 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.601623 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.601632 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.601648 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.601657 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:13Z","lastTransitionTime":"2026-01-29T16:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.706263 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.706413 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.706436 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.706499 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.706521 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:13Z","lastTransitionTime":"2026-01-29T16:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.810995 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.811077 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.811097 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.811128 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.811147 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:13Z","lastTransitionTime":"2026-01-29T16:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.853645 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 05:40:46.504149847 +0000 UTC Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.915107 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.915183 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.915202 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.915224 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:13 crc kubenswrapper[4714]: I0129 16:11:13.915237 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:13Z","lastTransitionTime":"2026-01-29T16:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.018380 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.018439 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.018459 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.018481 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.018495 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:14Z","lastTransitionTime":"2026-01-29T16:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.126305 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.126368 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.126379 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.126396 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.126420 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:14Z","lastTransitionTime":"2026-01-29T16:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.184200 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:14 crc kubenswrapper[4714]: E0129 16:11:14.184390 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.184443 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.184635 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:14 crc kubenswrapper[4714]: E0129 16:11:14.184802 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:14 crc kubenswrapper[4714]: E0129 16:11:14.184901 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.186759 4714 scope.go:117] "RemoveContainer" containerID="98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.204871 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2932c3bd-04c7-4494-8d43-03c4524a353f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78109e9e498f3d640934a2c45faf27133d32ece9e35e42ed48ddba720fa7a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc1d7b5f32fe8de4bfd5ba555838989293be538217424c86ea5cedadb8295f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168
.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tg8sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.218347 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89403a5-379d-4c3f-a87f-8d2ed63ab368\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f0bdc60ea5a5e188b2ed8894bb387084567b294c3a356fed01493d4bd6a7caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6068173079b1d5cf7de92fe34bd0a4701b11a4dba7ae384f2d0e33f185656107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6565dc4c35c618a239bdc2be6dd45a9057e83573c92c3fc816350eb014c822c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e
440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.229120 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.229176 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.229189 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.229214 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.229228 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:14Z","lastTransitionTime":"2026-01-29T16:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.240538 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.257653 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.293800 4714 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:47Z\\\",\\\"message\\\":\\\"ing reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:10:47.314985 6375 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:10:47.315336 6375 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:10:47.315637 6375 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:10:47.315679 6375 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:10:47.315724 6375 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:10:47.315752 6375 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 16:10:47.315774 6375 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 16:10:47.315793 6375 factory.go:656] Stopping watch factory\\\\nI0129 16:10:47.315813 6375 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:10:47.315849 6375 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:10:47.315864 6375 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:10:47.315882 6375 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:10:47.315889 6375 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sbnkt_openshift-ovn-kubernetes(04b20f02-6c1e-4082-8233-8f06bda63195)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.309486 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2w92b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"791456e8-8d95-4cdb-8fd1-d06a7586b328\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2w92b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.327775 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.331844 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.331902 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.331926 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.332064 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.332172 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:14Z","lastTransitionTime":"2026-01-29T16:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.342580 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.358356 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.370583 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.391658 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.407447 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnk
ube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.419303 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.434620 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.434679 4714 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.434698 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.434719 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.434731 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:14Z","lastTransitionTime":"2026-01-29T16:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.436876 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c747ed61a18e27d63630395860ce896242426b1ce46ea5f9d00534b808804a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:11:03Z\\\",\\\"message\\\":\\\"2026-01-29T16:10:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_27c11876-c5ae-4d35-b720-78ab09bfac0c\\\\n2026-01-29T16:10:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_27c11876-c5ae-4d35-b720-78ab09bfac0c to /host/opt/cni/bin/\\\\n2026-01-29T16:10:18Z [verbose] multus-daemon started\\\\n2026-01-29T16:10:18Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:11:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.451313 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.469629 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a65c2b0-9568-4a06-8073-93ec194b4ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89cb2f8d441042c4c95e3cc056f991565c18bd93dcb0d61f3735e2451ff439a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86530d623401c3cf5ebe44fcc1c2110a5f4dde059a5d27f74e118d5ca1df40dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86530d623401c3cf5ebe44fcc1c2110a5f4dde059a5d27f74e118d5ca1df40dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.492361 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef
1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.508810 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.529383 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.538128 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.538195 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.538217 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.538240 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.538253 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:14Z","lastTransitionTime":"2026-01-29T16:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.640834 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.640892 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.640910 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.640958 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.640975 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:14Z","lastTransitionTime":"2026-01-29T16:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.689691 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sbnkt_04b20f02-6c1e-4082-8233-8f06bda63195/ovnkube-controller/2.log" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.692874 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerStarted","Data":"95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365"} Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.694363 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.713738 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.775672 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.775722 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.775732 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.775748 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.775757 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:14Z","lastTransitionTime":"2026-01-29T16:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.781732 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.800245 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.821172 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a65c2b0-9568-4a06-8073-93ec194b4ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89cb2f8d441042c4c95e3cc056f991565c18bd93dcb0d61f3735e2451ff439a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86530d623401c3cf5ebe44fcc1c2110a5f4dde059a5d27f74e118d5ca1df40dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86530d623401c3cf5ebe44fcc1c2110a5f4dde059a5d27f74e118d5ca1df40dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.855056 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 09:01:16.247836598 +0000 UTC Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.856566 4714 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:
09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.869212 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.878235 4714 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.878266 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.878278 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.878316 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.878331 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:14Z","lastTransitionTime":"2026-01-29T16:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.894011 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95506ff95d5b470923bf3c4615b45a7b4741260b
b32a99839052875ecf50a365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:47Z\\\",\\\"message\\\":\\\"ing reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:10:47.314985 6375 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:10:47.315336 6375 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:10:47.315637 6375 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:10:47.315679 6375 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:10:47.315724 6375 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:10:47.315752 6375 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 16:10:47.315774 6375 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 16:10:47.315793 6375 factory.go:656] Stopping watch factory\\\\nI0129 16:10:47.315813 6375 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:10:47.315849 6375 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:10:47.315864 6375 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:10:47.315882 6375 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:10:47.315889 6375 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.906681 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2932c3bd-04c7-4494-8d43-03c4524a353f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78109e9e498f3d640934a2c45faf27133d32ece9e35e42ed48ddba720fa7a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc1d7b5f32fe8de4bfd5ba555838989293be538217424c86ea5cedadb8295f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tg8sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 
16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.918738 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89403a5-379d-4c3f-a87f-8d2ed63ab368\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f0bdc60ea5a5e188b2ed8894bb387084567b294c3a356fed01493d4bd6a7caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6068173079b1d5cf7de92fe34bd0a4701b11a4dba7ae384f2d0e33f185656107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6565dc4c35c618a239bdc2be6dd45a9057e83573c92c3fc816350eb014c822c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.929724 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.944969 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.955910 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.967109 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2w92b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"791456e8-8d95-4cdb-8fd1-d06a7586b328\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2w92b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.980527 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:14 crc kubenswrapper[4714]: 
I0129 16:11:14.980573 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.980584 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.980604 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.980615 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:14Z","lastTransitionTime":"2026-01-29T16:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:14 crc kubenswrapper[4714]: I0129 16:11:14.992899 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.009346 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.025228 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T16:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.039339 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c747ed61a18e27d63630395860ce896242426b1ce46ea5f9d00534b808804a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:11:03Z\\\",\\\"message\\\":\\\"2026-01-29T16:10:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_27c11876-c5ae-4d35-b720-78ab09bfac0c\\\\n2026-01-29T16:10:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_27c11876-c5ae-4d35-b720-78ab09bfac0c to /host/opt/cni/bin/\\\\n2026-01-29T16:10:18Z [verbose] multus-daemon started\\\\n2026-01-29T16:10:18Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:11:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.057976 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.078545 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnk
ube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.083586 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.083653 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.083675 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.083701 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.083718 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:15Z","lastTransitionTime":"2026-01-29T16:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.184079 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:15 crc kubenswrapper[4714]: E0129 16:11:15.185116 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.186045 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.186092 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.186104 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.186120 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.186133 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:15Z","lastTransitionTime":"2026-01-29T16:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.288732 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.288785 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.288800 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.288819 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.288831 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:15Z","lastTransitionTime":"2026-01-29T16:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.392486 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.393027 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.393050 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.393071 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.393085 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:15Z","lastTransitionTime":"2026-01-29T16:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.495789 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.495856 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.495870 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.495894 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.495911 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:15Z","lastTransitionTime":"2026-01-29T16:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.599404 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.599465 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.599480 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.599520 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.599537 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:15Z","lastTransitionTime":"2026-01-29T16:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.701130 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.701170 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.701182 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.701198 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.701209 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:15Z","lastTransitionTime":"2026-01-29T16:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.805428 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.805493 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.805512 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.805536 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.805552 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:15Z","lastTransitionTime":"2026-01-29T16:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.855994 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 01:39:29.579521206 +0000 UTC Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.909197 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.909238 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.909251 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.909268 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:15 crc kubenswrapper[4714]: I0129 16:11:15.909279 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:15Z","lastTransitionTime":"2026-01-29T16:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.012860 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.012970 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.012989 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.013014 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.013032 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:16Z","lastTransitionTime":"2026-01-29T16:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.116350 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.116424 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.116447 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.116472 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.116493 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:16Z","lastTransitionTime":"2026-01-29T16:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.183670 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.183710 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.183779 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:16 crc kubenswrapper[4714]: E0129 16:11:16.183835 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:16 crc kubenswrapper[4714]: E0129 16:11:16.183973 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:16 crc kubenswrapper[4714]: E0129 16:11:16.190227 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.223731 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.223791 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.223809 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.223835 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.223901 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:16Z","lastTransitionTime":"2026-01-29T16:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.332365 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.332703 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.332718 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.332738 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.332751 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:16Z","lastTransitionTime":"2026-01-29T16:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.435220 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.435256 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.435268 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.435284 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.435296 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:16Z","lastTransitionTime":"2026-01-29T16:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.538443 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.538536 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.538555 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.538584 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.538603 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:16Z","lastTransitionTime":"2026-01-29T16:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.641488 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.641533 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.641541 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.641556 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.641565 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:16Z","lastTransitionTime":"2026-01-29T16:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.703556 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sbnkt_04b20f02-6c1e-4082-8233-8f06bda63195/ovnkube-controller/3.log" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.704477 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sbnkt_04b20f02-6c1e-4082-8233-8f06bda63195/ovnkube-controller/2.log" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.708899 4714 generic.go:334] "Generic (PLEG): container finished" podID="04b20f02-6c1e-4082-8233-8f06bda63195" containerID="95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365" exitCode=1 Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.708993 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerDied","Data":"95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365"} Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.709074 4714 scope.go:117] "RemoveContainer" containerID="98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.710088 4714 scope.go:117] "RemoveContainer" containerID="95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365" Jan 29 16:11:16 crc kubenswrapper[4714]: E0129 16:11:16.710417 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-sbnkt_openshift-ovn-kubernetes(04b20f02-6c1e-4082-8233-8f06bda63195)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.731924 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.745146 4714 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.745214 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.745235 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.745264 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.745288 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:16Z","lastTransitionTime":"2026-01-29T16:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.768466 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95506ff95d5b470923bf3c4615b45a7b4741260b
b32a99839052875ecf50a365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:47Z\\\",\\\"message\\\":\\\"ing reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:10:47.314985 6375 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:10:47.315336 6375 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:10:47.315637 6375 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:10:47.315679 6375 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:10:47.315724 6375 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:10:47.315752 6375 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 16:10:47.315774 6375 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 16:10:47.315793 6375 factory.go:656] Stopping watch factory\\\\nI0129 16:10:47.315813 6375 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:10:47.315849 6375 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:10:47.315864 6375 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:10:47.315882 6375 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:10:47.315889 6375 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:11:16Z\\\",\\\"message\\\":\\\"] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:11:15.260241 6760 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:11:15.260645 6760 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 16:11:15.261304 6760 obj_retry.go:551] Creating *factory.egressNode crc took: 14.368203ms\\\\nI0129 16:11:15.261337 6760 factory.go:1336] Added *v1.Node event handler 7\\\\nI0129 16:11:15.261379 6760 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0129 16:11:15.261465 6760 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:11:15.261482 6760 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:11:15.261500 6760 factory.go:656] Stopping watch factory\\\\nI0129 16:11:15.261522 6760 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:11:15.261535 6760 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:11:15.261674 6760 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0129 16:11:15.261875 6760 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0129 16:11:15.261909 6760 ovnkube.go:599] 
Stopped ovnkube\\\\nI0129 16:11:15.261936 6760 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 16:11:15.262026 6760 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\
\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.788270 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2932c3bd-04c7-4494-8d43-03c4524a353f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78109e9e498f3d640934a2c45faf27133d32ece9e35e42ed48ddba720fa7a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc1d7b5f32fe8de4bfd5ba555838989293be538217424c86ea5cedadb8295f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tg8sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:16Z is after 2025-08-24T17:21:41Z" Jan 29 
16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.808486 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89403a5-379d-4c3f-a87f-8d2ed63ab368\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f0bdc60ea5a5e188b2ed8894bb387084567b294c3a356fed01493d4bd6a7caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6068173079b1d5cf7de92fe34bd0a4701b11a4dba7ae384f2d0e33f185656107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6565dc4c35c618a239bdc2be6dd45a9057e83573c92c3fc816350eb014c822c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.830821 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.848091 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.848184 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.848204 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.848680 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.848738 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:16Z","lastTransitionTime":"2026-01-29T16:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.856769 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 15:45:18.064304185 +0000 UTC Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.858406 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secre
ts/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exit
Code\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T16:11:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.877177 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.896727 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2w92b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"791456e8-8d95-4cdb-8fd1-d06a7586b328\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2w92b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.920932 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.942987 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.953002 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.953056 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.953071 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.953093 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.953110 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:16Z","lastTransitionTime":"2026-01-29T16:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.961241 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:16 crc kubenswrapper[4714]: I0129 16:11:16.984706 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c747ed61a18e27d63630395860ce896242426b1ce46ea5f9d00534b808804a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:11:03Z\\\",\\\"message\\\":\\\"2026-01-29T16:10:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_27c11876-c5ae-4d35-b720-78ab09bfac0c\\\\n2026-01-29T16:10:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_27c11876-c5ae-4d35-b720-78ab09bfac0c to /host/opt/cni/bin/\\\\n2026-01-29T16:10:18Z [verbose] multus-daemon started\\\\n2026-01-29T16:10:18Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:11:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:16Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.010465 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.032027 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnk
ube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.056171 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manag
er-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.057476 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.057522 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.057539 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.057564 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.057584 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:17Z","lastTransitionTime":"2026-01-29T16:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.081412 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.095123 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.110387 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a65c2b0-9568-4a06-8073-93ec194b4ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89cb2f8d441042c4c95e3cc056f991565c18bd93dcb0d61f3735e2451ff439a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86530d623401c3cf5ebe44fcc1c2110a5f4dde059a5d27f74e118d5ca1df40dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86530d623401c3cf5ebe44fcc1c2110a5f4dde059a5d27f74e118d5ca1df40dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.140829 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef
1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.160078 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.160138 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.160157 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.160182 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.160200 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:17Z","lastTransitionTime":"2026-01-29T16:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.183189 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:17 crc kubenswrapper[4714]: E0129 16:11:17.183367 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.263970 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.264054 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.264074 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.264100 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.264118 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:17Z","lastTransitionTime":"2026-01-29T16:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.367346 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.367410 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.367429 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.367454 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.367473 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:17Z","lastTransitionTime":"2026-01-29T16:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.470690 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.470785 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.470814 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.470846 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.470873 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:17Z","lastTransitionTime":"2026-01-29T16:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.573590 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.573673 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.573690 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.573714 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.573730 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:17Z","lastTransitionTime":"2026-01-29T16:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.676317 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.676355 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.676365 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.676382 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.676391 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:17Z","lastTransitionTime":"2026-01-29T16:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.779840 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.779904 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.779921 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.779983 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.780002 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:17Z","lastTransitionTime":"2026-01-29T16:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.857735 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 21:07:37.106953819 +0000 UTC Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.883357 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.883430 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.883441 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.883456 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.883466 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:17Z","lastTransitionTime":"2026-01-29T16:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.986323 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.986419 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.986442 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.986477 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:17 crc kubenswrapper[4714]: I0129 16:11:17.986504 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:17Z","lastTransitionTime":"2026-01-29T16:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.090513 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.090598 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.090620 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.090658 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.090683 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:18Z","lastTransitionTime":"2026-01-29T16:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.183651 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.183751 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:18 crc kubenswrapper[4714]: E0129 16:11:18.183830 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.183861 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:18 crc kubenswrapper[4714]: E0129 16:11:18.183998 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:18 crc kubenswrapper[4714]: E0129 16:11:18.184234 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.193499 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.193574 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.193597 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.193626 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.193647 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:18Z","lastTransitionTime":"2026-01-29T16:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.296301 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.296397 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.296425 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.296459 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.296483 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:18Z","lastTransitionTime":"2026-01-29T16:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.315755 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:11:18 crc kubenswrapper[4714]: E0129 16:11:18.316118 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:22.31608611 +0000 UTC m=+148.836587230 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.400374 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.400441 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.400461 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.400490 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.400507 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:18Z","lastTransitionTime":"2026-01-29T16:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.417410 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.417479 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.417516 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:18 crc kubenswrapper[4714]: E0129 16:11:18.417549 4714 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.417570 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:18 crc kubenswrapper[4714]: E0129 16:11:18.417639 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:22.417612243 +0000 UTC m=+148.938113403 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:11:18 crc kubenswrapper[4714]: E0129 16:11:18.417795 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:11:18 crc kubenswrapper[4714]: E0129 16:11:18.417835 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:11:18 crc kubenswrapper[4714]: E0129 16:11:18.417840 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:11:18 crc kubenswrapper[4714]: E0129 16:11:18.417862 4714 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:11:18 crc kubenswrapper[4714]: E0129 16:11:18.417870 4714 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:11:18 crc kubenswrapper[4714]: E0129 16:11:18.417874 4714 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:11:18 crc kubenswrapper[4714]: E0129 16:11:18.417983 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:22.417923592 +0000 UTC m=+148.938424752 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:11:18 crc kubenswrapper[4714]: E0129 16:11:18.417882 4714 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:11:18 crc kubenswrapper[4714]: E0129 16:11:18.418014 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:22.418001944 +0000 UTC m=+148.938503094 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:11:18 crc kubenswrapper[4714]: E0129 16:11:18.418062 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:22.418035365 +0000 UTC m=+148.938536515 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.504244 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.504313 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.504330 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.504357 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.504376 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:18Z","lastTransitionTime":"2026-01-29T16:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.607660 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.607845 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.607870 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.607916 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.607978 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:18Z","lastTransitionTime":"2026-01-29T16:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.712109 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.712175 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.712193 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.712218 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.712236 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:18Z","lastTransitionTime":"2026-01-29T16:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.716218 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sbnkt_04b20f02-6c1e-4082-8233-8f06bda63195/ovnkube-controller/3.log" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.826496 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.826543 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.826554 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.826572 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.826586 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:18Z","lastTransitionTime":"2026-01-29T16:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.857887 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 03:28:11.911023231 +0000 UTC Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.929640 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.929712 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.929736 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.929766 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:18 crc kubenswrapper[4714]: I0129 16:11:18.929787 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:18Z","lastTransitionTime":"2026-01-29T16:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.033219 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.033277 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.033294 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.033318 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.033336 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:19Z","lastTransitionTime":"2026-01-29T16:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.136771 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.136840 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.136857 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.136887 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.136910 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:19Z","lastTransitionTime":"2026-01-29T16:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.183633 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:19 crc kubenswrapper[4714]: E0129 16:11:19.183928 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.239638 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.239691 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.239710 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.239734 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.239792 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:19Z","lastTransitionTime":"2026-01-29T16:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.342916 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.343035 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.343056 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.343085 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.343108 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:19Z","lastTransitionTime":"2026-01-29T16:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.446208 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.446249 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.446296 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.446314 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.446327 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:19Z","lastTransitionTime":"2026-01-29T16:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.549374 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.549451 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.549474 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.549504 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.549527 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:19Z","lastTransitionTime":"2026-01-29T16:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.652793 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.653253 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.653271 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.653296 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.653313 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:19Z","lastTransitionTime":"2026-01-29T16:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.755976 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.756056 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.756083 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.756118 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.756141 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:19Z","lastTransitionTime":"2026-01-29T16:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.858010 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 08:26:03.331778385 +0000 UTC Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.859262 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.859313 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.859335 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.859365 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.859382 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:19Z","lastTransitionTime":"2026-01-29T16:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.966428 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.966517 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.966543 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.966576 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:19 crc kubenswrapper[4714]: I0129 16:11:19.966609 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:19Z","lastTransitionTime":"2026-01-29T16:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.070418 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.070494 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.070531 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.070562 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.070583 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:20Z","lastTransitionTime":"2026-01-29T16:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.174065 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.174148 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.174187 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.174220 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.174242 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:20Z","lastTransitionTime":"2026-01-29T16:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.183323 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.183363 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:20 crc kubenswrapper[4714]: E0129 16:11:20.183544 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.183629 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:20 crc kubenswrapper[4714]: E0129 16:11:20.183814 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:20 crc kubenswrapper[4714]: E0129 16:11:20.184004 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.276747 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.276824 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.276842 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.276867 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.276887 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:20Z","lastTransitionTime":"2026-01-29T16:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.379975 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.380040 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.380053 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.380073 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.380088 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:20Z","lastTransitionTime":"2026-01-29T16:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.483346 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.483416 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.483435 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.483461 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.483484 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:20Z","lastTransitionTime":"2026-01-29T16:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.586589 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.586632 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.586643 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.586660 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.586672 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:20Z","lastTransitionTime":"2026-01-29T16:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.690154 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.690211 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.690231 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.690254 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.690271 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:20Z","lastTransitionTime":"2026-01-29T16:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.793075 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.793138 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.793162 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.793192 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.793215 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:20Z","lastTransitionTime":"2026-01-29T16:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.858203 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 07:17:38.887233849 +0000 UTC Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.896644 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.896717 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.896741 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.896770 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:20 crc kubenswrapper[4714]: I0129 16:11:20.896792 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:20Z","lastTransitionTime":"2026-01-29T16:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.000345 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.000414 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.000435 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.000462 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.000484 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:21Z","lastTransitionTime":"2026-01-29T16:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.103743 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.103810 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.103829 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.103850 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.103865 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:21Z","lastTransitionTime":"2026-01-29T16:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.183214 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:21 crc kubenswrapper[4714]: E0129 16:11:21.183344 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.205902 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.205960 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.205979 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.205998 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.206009 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:21Z","lastTransitionTime":"2026-01-29T16:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.309091 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.309144 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.309157 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.309175 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.309187 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:21Z","lastTransitionTime":"2026-01-29T16:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.412440 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.412487 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.412500 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.412517 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.412528 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:21Z","lastTransitionTime":"2026-01-29T16:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.515769 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.515821 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.515834 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.515855 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.515868 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:21Z","lastTransitionTime":"2026-01-29T16:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.617924 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.617995 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.618007 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.618024 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.618038 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:21Z","lastTransitionTime":"2026-01-29T16:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.721224 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.721280 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.721295 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.721317 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.721333 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:21Z","lastTransitionTime":"2026-01-29T16:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.824326 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.824405 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.824430 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.824456 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.824477 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:21Z","lastTransitionTime":"2026-01-29T16:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.858339 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 14:29:59.858478558 +0000 UTC Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.927057 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.927127 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.927146 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.927173 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:21 crc kubenswrapper[4714]: I0129 16:11:21.927194 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:21Z","lastTransitionTime":"2026-01-29T16:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.030478 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.030552 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.030570 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.030595 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.030615 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:22Z","lastTransitionTime":"2026-01-29T16:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.136526 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.136578 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.136588 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.136609 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.136619 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:22Z","lastTransitionTime":"2026-01-29T16:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.184107 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.184202 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:22 crc kubenswrapper[4714]: E0129 16:11:22.184238 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.184334 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:22 crc kubenswrapper[4714]: E0129 16:11:22.184393 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:22 crc kubenswrapper[4714]: E0129 16:11:22.184472 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.238625 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.238652 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.238660 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.238670 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.238678 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:22Z","lastTransitionTime":"2026-01-29T16:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.341239 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.341310 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.341334 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.341361 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.341378 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:22Z","lastTransitionTime":"2026-01-29T16:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.444539 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.444587 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.444603 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.444625 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.444641 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:22Z","lastTransitionTime":"2026-01-29T16:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.547519 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.547592 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.547619 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.547650 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.547674 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:22Z","lastTransitionTime":"2026-01-29T16:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.650919 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.650997 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.651019 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.651046 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.651063 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:22Z","lastTransitionTime":"2026-01-29T16:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.712471 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.712517 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.712533 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.712550 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.712564 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:22Z","lastTransitionTime":"2026-01-29T16:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:22 crc kubenswrapper[4714]: E0129 16:11:22.729605 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:22Z is after 2025-08-24T17:21:41Z"
Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.733891 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.733953 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.733965 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.733980 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.733989 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:22Z","lastTransitionTime":"2026-01-29T16:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:22 crc kubenswrapper[4714]: E0129 16:11:22.745813 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:22Z is after 2025-08-24T17:21:41Z"
Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.749363 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.749435 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.749460 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.749493 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.749518 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:22Z","lastTransitionTime":"2026-01-29T16:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:22 crc kubenswrapper[4714]: E0129 16:11:22.770543 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.775742 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.775810 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.775833 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.775862 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.775886 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:22Z","lastTransitionTime":"2026-01-29T16:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:22 crc kubenswrapper[4714]: E0129 16:11:22.796625 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.801598 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.801885 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.802092 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.802253 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.802420 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:22Z","lastTransitionTime":"2026-01-29T16:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:22 crc kubenswrapper[4714]: E0129 16:11:22.817844 4714 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"856e4040-197b-4e74-9239-c0ebcf6976ae\\\",\\\"systemUUID\\\":\\\"1ab8f43b-7f84-4fd2-a80a-2aae14146bf4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:22 crc kubenswrapper[4714]: E0129 16:11:22.818005 4714 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.819888 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.819952 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.819974 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.819995 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.820008 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:22Z","lastTransitionTime":"2026-01-29T16:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.859395 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 08:19:51.843447223 +0000 UTC Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.922590 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.922668 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.922693 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.922722 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:22 crc kubenswrapper[4714]: I0129 16:11:22.922745 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:22Z","lastTransitionTime":"2026-01-29T16:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.025829 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.025901 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.025913 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.025959 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.025971 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:23Z","lastTransitionTime":"2026-01-29T16:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.127692 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.127751 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.127764 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.127782 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.127798 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:23Z","lastTransitionTime":"2026-01-29T16:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.183500 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:23 crc kubenswrapper[4714]: E0129 16:11:23.183682 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.230614 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.230699 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.230728 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.230760 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.230782 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:23Z","lastTransitionTime":"2026-01-29T16:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.333709 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.333779 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.333798 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.333823 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.333840 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:23Z","lastTransitionTime":"2026-01-29T16:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.437729 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.437793 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.437812 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.437836 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.437852 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:23Z","lastTransitionTime":"2026-01-29T16:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.540755 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.540841 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.540867 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.540897 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.540919 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:23Z","lastTransitionTime":"2026-01-29T16:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.646449 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.646508 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.646530 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.646555 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.646574 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:23Z","lastTransitionTime":"2026-01-29T16:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.749140 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.749203 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.749225 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.749255 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.749275 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:23Z","lastTransitionTime":"2026-01-29T16:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.851757 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.851822 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.851842 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.851868 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.851887 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:23Z","lastTransitionTime":"2026-01-29T16:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.860286 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 23:56:50.673855057 +0000 UTC Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.955653 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.955723 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.955744 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.955772 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:23 crc kubenswrapper[4714]: I0129 16:11:23.955793 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:23Z","lastTransitionTime":"2026-01-29T16:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.058737 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.058792 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.058804 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.058821 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.058833 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:24Z","lastTransitionTime":"2026-01-29T16:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.161679 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.161742 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.161759 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.161785 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.161802 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:24Z","lastTransitionTime":"2026-01-29T16:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.184073 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.184162 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.184224 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:24 crc kubenswrapper[4714]: E0129 16:11:24.184367 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:24 crc kubenswrapper[4714]: E0129 16:11:24.184666 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:24 crc kubenswrapper[4714]: E0129 16:11:24.184806 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.204027 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.221525 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a65c2b0-9568-4a06-8073-93ec194b4ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89cb2f8d441042c4c95e3cc056f991565c18bd93dcb0d61f3735e2451ff439a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86530d623401c3cf5ebe44fcc1c2110a5f4dde059a5d27f74e118d5ca1df40dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86530d623401c3cf5ebe44fcc1c2110a5f4dde059a5d27f74e118d5ca1df40dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.249838 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef
1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.264969 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.265011 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.265028 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.265047 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.265074 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:24Z","lastTransitionTime":"2026-01-29T16:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.268927 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.294240 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.310843 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2932c3bd-04c7-4494-8d43-03c4524a353f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78109e9e498f3d640934a2c45faf27133d32ece9e35e42ed48ddba720fa7a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc1d7b5f32fe8de4bfd5ba555838989293be538217424c86ea5cedadb8295f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tg8sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.325685 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89403a5-379d-4c3f-a87f-8d2ed63ab368\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f0bdc60ea5a5e188b2ed8894bb387084567b294c3a356fed01493d4bd6a7caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6068173079b1d5cf7de92fe34bd0a4701b11a4dba7ae384f2d0e33f185656107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6565dc4c35c618a239bdc2be6dd45a9057e83573c92c3fc816350eb014c822c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.340183 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.350328 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.367053 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.367098 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.367111 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.367129 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.367141 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:24Z","lastTransitionTime":"2026-01-29T16:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.371067 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d5d7851e659633c2e0da4975fa0e9e36d8d1f1b9d725661500f818f6268bcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:10:47Z\\\",\\\"message\\\":\\\"ing reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:10:47.314985 6375 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:10:47.315336 6375 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:10:47.315637 6375 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:10:47.315679 6375 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:10:47.315724 6375 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:10:47.315752 6375 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 16:10:47.315774 6375 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 16:10:47.315793 6375 factory.go:656] Stopping watch factory\\\\nI0129 16:10:47.315813 6375 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:10:47.315849 6375 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:10:47.315864 6375 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:10:47.315882 6375 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:10:47.315889 6375 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:11:16Z\\\",\\\"message\\\":\\\"] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:11:15.260241 6760 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:11:15.260645 6760 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 16:11:15.261304 6760 obj_retry.go:551] Creating *factory.egressNode crc took: 14.368203ms\\\\nI0129 16:11:15.261337 6760 factory.go:1336] Added *v1.Node event handler 7\\\\nI0129 16:11:15.261379 6760 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0129 16:11:15.261465 6760 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:11:15.261482 6760 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:11:15.261500 6760 factory.go:656] Stopping watch factory\\\\nI0129 16:11:15.261522 6760 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:11:15.261535 6760 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:11:15.261674 6760 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0129 16:11:15.261875 6760 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0129 16:11:15.261909 6760 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:11:15.261936 6760 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 16:11:15.262026 6760 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.381834 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2w92b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"791456e8-8d95-4cdb-8fd1-d06a7586b328\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2w92b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.392984 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.405037 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.419768 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.429985 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 
16:11:24.444227 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.459575 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.467910 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.468791 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.468813 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.468823 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.468838 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.468849 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:24Z","lastTransitionTime":"2026-01-29T16:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.477570 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c747ed61a18e27d63630395860ce896242426b1ce46ea5f9d00534b808804a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:11:03Z\\\",\\\"message\\\":\\\"2026-01-29T16:10:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_27c11876-c5ae-4d35-b720-78ab09bfac0c\\\\n2026-01-29T16:10:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_27c11876-c5ae-4d35-b720-78ab09bfac0c to /host/opt/cni/bin/\\\\n2026-01-29T16:10:18Z [verbose] multus-daemon started\\\\n2026-01-29T16:10:18Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:11:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.570847 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.570916 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.570967 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.570996 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.571017 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:24Z","lastTransitionTime":"2026-01-29T16:11:24Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.673249 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.673320 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.673341 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.673401 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.673423 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:24Z","lastTransitionTime":"2026-01-29T16:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.776130 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.776171 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.776181 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.776196 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.776208 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:24Z","lastTransitionTime":"2026-01-29T16:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.860618 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 18:42:32.7015951 +0000 UTC Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.880027 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.880082 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.880106 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.880377 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.880442 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:24Z","lastTransitionTime":"2026-01-29T16:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.983678 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.983753 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.983772 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.983795 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:24 crc kubenswrapper[4714]: I0129 16:11:24.983812 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:24Z","lastTransitionTime":"2026-01-29T16:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.087121 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.087184 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.087210 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.087239 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.087261 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:25Z","lastTransitionTime":"2026-01-29T16:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.184030 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:25 crc kubenswrapper[4714]: E0129 16:11:25.184227 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.190756 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.190818 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.190837 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.190859 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.190875 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:25Z","lastTransitionTime":"2026-01-29T16:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.294254 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.294329 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.294350 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.294381 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.294404 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:25Z","lastTransitionTime":"2026-01-29T16:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.396440 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.396501 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.396511 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.396524 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.396533 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:25Z","lastTransitionTime":"2026-01-29T16:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.500340 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.500379 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.500389 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.500404 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.500414 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:25Z","lastTransitionTime":"2026-01-29T16:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.603123 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.603269 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.603295 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.603374 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.603454 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:25Z","lastTransitionTime":"2026-01-29T16:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.706666 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.706710 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.706719 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.706735 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.706744 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:25Z","lastTransitionTime":"2026-01-29T16:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.809163 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.809254 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.809267 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.809285 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.809297 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:25Z","lastTransitionTime":"2026-01-29T16:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.861601 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 12:21:39.719320224 +0000 UTC Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.917379 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.917446 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.917471 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.917500 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:25 crc kubenswrapper[4714]: I0129 16:11:25.917521 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:25Z","lastTransitionTime":"2026-01-29T16:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.019893 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.019978 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.019997 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.020021 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.020038 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:26Z","lastTransitionTime":"2026-01-29T16:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.123373 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.123435 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.123463 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.123486 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.123501 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:26Z","lastTransitionTime":"2026-01-29T16:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.184258 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.184315 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.184388 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:26 crc kubenswrapper[4714]: E0129 16:11:26.184471 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:26 crc kubenswrapper[4714]: E0129 16:11:26.184628 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:26 crc kubenswrapper[4714]: E0129 16:11:26.185065 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.225973 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.226045 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.226070 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.226105 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.226128 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:26Z","lastTransitionTime":"2026-01-29T16:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.329763 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.329851 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.329878 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.329912 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.329969 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:26Z","lastTransitionTime":"2026-01-29T16:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} [... identical node-status updates ("NodeHasSufficientMemory", "NodeHasNoDiskPressure", "NodeHasSufficientPID", "NodeNotReady", "Node became not ready") repeated every ~100 ms from 16:11:26.432 to 16:11:26.845 ...]
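Every status update in this stretch has the same root cause: the kubelet keeps the node's Ready condition False because the container runtime reports NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/ (on this cluster that file is written by the OVN-Kubernetes pods once ovnkube-controller comes up). A minimal Go sketch of that directory check, assuming the usual CNI file extensions; this is an illustration of the gating condition, not the kubelet's or CRI-O's actual code:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cniConfigPresent reports whether any CNI network config exists in
    // dir, approximating the readiness check the runtime performs.
    func cniConfigPresent(dir string) bool {
        for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
            matches, err := filepath.Glob(filepath.Join(dir, pattern))
            if err == nil && len(matches) > 0 {
                return true
            }
        }
        return false
    }

    func main() {
        dir := "/etc/kubernetes/cni/net.d" // directory named in the log
        if !cniConfigPresent(dir) {
            fmt.Fprintf(os.Stderr, "NetworkReady=false: no CNI configuration file in %s\n", dir)
        }
    }

Once ovnkube-controller restarts successfully and writes its config into that directory, the runtime flips NetworkReady to true and these heartbeat entries stop.

Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.845512 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:26Z","lastTransitionTime":"2026-01-29T16:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 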
Has your network provider started?"} Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.862692 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 14:24:11.552722654 +0000 UTC Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.947780 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.947832 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.947849 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.947874 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:26 crc kubenswrapper[4714]: I0129 16:11:26.947891 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:26Z","lastTransitionTime":"2026-01-29T16:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.051283 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.051319 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.051329 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.051344 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.051354 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:27Z","lastTransitionTime":"2026-01-29T16:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.154269 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.154333 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.154351 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.154374 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.154392 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:27Z","lastTransitionTime":"2026-01-29T16:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.184191 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:27 crc kubenswrapper[4714]: E0129 16:11:27.184459 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.257633 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.257690 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.257706 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.257728 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.258021 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:27Z","lastTransitionTime":"2026-01-29T16:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} [... identical node-status updates repeated every ~100 ms from 16:11:27.360 to 16:11:27.774 ...]
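Interleaved with the heartbeats, certificate_manager.go logs a fresh kubelet-serving rotation deadline roughly once per second, and the deadline jumps around (2025-12-23, 2025-12-04, 2026-01-14, 2025-11-28). That is expected: the deadline is recomputed with random jitter inside the certificate's validity window, and since every computed deadline is already in the past relative to the log clock of 2026-01-29, rotation is due on each pass. A sketch of that jitter, assuming client-go's documented behavior of picking a point 70-90% of the way through the validity window; the NotBefore value is an assumption, NotAfter comes from the log:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline picks a point 70-90% of the way through the
    // certificate's validity window (illustrative, not the real code).
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        frac := 0.7 + 0.2*rand.Float64()
        return notBefore.Add(time.Duration(float64(total) * frac))
    }

    func main() {
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiry from the log
        notBefore := notAfter.AddDate(-1, 0, 0)                   // assumed issue time
        for i := 0; i < 3; i++ {
            fmt.Println("computed rotation deadline:", rotationDeadline(notBefore, notAfter))
        }
    }

Each run of the loop prints a different deadline, which is exactly the pattern these log lines show.

Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.774829 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:27Z","lastTransitionTime":"2026-01-29T16:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 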
Has your network provider started?"} Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.862912 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 23:58:49.767703108 +0000 UTC Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.878898 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.878993 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.879013 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.879036 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.879050 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:27Z","lastTransitionTime":"2026-01-29T16:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.982196 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.982297 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.982319 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.982345 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:27 crc kubenswrapper[4714]: I0129 16:11:27.982362 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:27Z","lastTransitionTime":"2026-01-29T16:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.085790 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.085855 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.085873 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.085900 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.085918 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:28Z","lastTransitionTime":"2026-01-29T16:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.184006 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.184119 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.184035 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:28 crc kubenswrapper[4714]: E0129 16:11:28.184270 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:28 crc kubenswrapper[4714]: E0129 16:11:28.184267 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:28 crc kubenswrapper[4714]: E0129 16:11:28.184473 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.188080 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.188188 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.188257 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.188296 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.188364 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:28Z","lastTransitionTime":"2026-01-29T16:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.291708 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.291778 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.291802 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.291827 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.291847 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:28Z","lastTransitionTime":"2026-01-29T16:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} [... identical node-status updates repeated every ~100 ms from 16:11:28.394 to 16:11:28.807 ...]
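The same three sandbox-less pods (network-check-source, network-check-target, network-metrics-daemon) fail to sync at 16:11:26 and again at 16:11:28: sandbox creation needs the pod network, so the per-pod worker records "Error syncing pod, skipping" and requeues the pod until the network becomes ready. A schematic of that retry shape in Go, with a fixed requeue delay for illustration; the real kubelet uses its own work queue and backoff:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errNetworkNotReady = errors.New("network is not ready")

    // syncPod stands in for the kubelet's per-pod worker; it cannot
    // create a sandbox while the CNI network is unavailable.
    func syncPod(networkReady bool) error {
        if !networkReady {
            return errNetworkNotReady
        }
        return nil
    }

    func main() {
        for attempt := 1; attempt <= 3; attempt++ {
            if err := syncPod(false); err != nil {
                fmt.Printf("Error syncing pod, skipping: %v (attempt %d)\n", err, attempt)
                time.Sleep(10 * time.Millisecond) // illustrative requeue delay
            }
        }
    }

Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.807800 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:28Z","lastTransitionTime":"2026-01-29T16:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 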
Has your network provider started?"} Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.863464 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 05:26:17.802506184 +0000 UTC Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.910628 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.910790 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.910819 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.910895 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:28 crc kubenswrapper[4714]: I0129 16:11:28.910921 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:28Z","lastTransitionTime":"2026-01-29T16:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.014336 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.014408 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.014431 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.014462 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.014486 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:29Z","lastTransitionTime":"2026-01-29T16:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.117635 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.117700 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.117716 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.117740 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.117759 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:29Z","lastTransitionTime":"2026-01-29T16:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.184155 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:29 crc kubenswrapper[4714]: E0129 16:11:29.184372 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.185106 4714 scope.go:117] "RemoveContainer" containerID="95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365" Jan 29 16:11:29 crc kubenswrapper[4714]: E0129 16:11:29.185299 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-sbnkt_openshift-ovn-kubernetes(04b20f02-6c1e-4082-8233-8f06bda63195)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.205452 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89403a5-379d-4c3f-a87f-8d2ed63ab368\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f0bdc60ea5a5e188b2ed8894bb387084567b294c3a356fed01493d4bd6a7caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6068173079b1d5cf7de92fe34bd0a4701b11a4dba7ae384f2d0e33f185656107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6565dc4c35c618a239bdc2be6dd45a9057e83573c92c3fc816350eb014c
822c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2667196228a62c452b7eaff0103e9bd92e88829c669c109b415b2fe28bb8cec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.220883 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.220984 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.221004 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.221031 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.221049 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:29Z","lastTransitionTime":"2026-01-29T16:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.226067 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.243113 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8c765f3-89eb-4077-8829-03e86eb0c90c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec263dd306a333c63e0672bd5b5a5bf7cd7814c2c51bb480aac7c8e35591d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ppngk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.275093 4714 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04b20f02-6c1e-4082-8233-8f06bda63195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:11:16Z\\\",\\\"message\\\":\\\"] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:11:15.260241 6760 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:11:15.260645 6760 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 16:11:15.261304 6760 obj_retry.go:551] Creating *factory.egressNode crc took: 14.368203ms\\\\nI0129 16:11:15.261337 6760 factory.go:1336] Added *v1.Node event handler 7\\\\nI0129 16:11:15.261379 6760 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0129 16:11:15.261465 6760 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:11:15.261482 6760 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:11:15.261500 6760 factory.go:656] Stopping watch factory\\\\nI0129 16:11:15.261522 6760 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:11:15.261535 6760 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:11:15.261674 6760 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0129 16:11:15.261875 6760 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0129 16:11:15.261909 6760 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:11:15.261936 6760 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 16:11:15.262026 6760 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sbnkt_openshift-ovn-kubernetes(04b20f02-6c1e-4082-8233-8f06bda63195)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vrsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sbnkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.293402 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2932c3bd-04c7-4494-8d43-03c4524a353f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78109e9e498f3d640934a2c45faf27133d32ece9e35e42ed48ddba720fa7a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc1d7b5f32fe8de4bfd5ba555838989293be538217424c86ea5cedadb8295f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvrtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tg8sw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.314249 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.324211 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.324459 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.324627 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.324802 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.324981 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:29Z","lastTransitionTime":"2026-01-29T16:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.334662 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.366391 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b20fd8d-1ebb-47d0-8676-403b99dac1ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ec9086c2f128a0ee5f564692cf28086688a45cf9d6b328c33446ad8c6f2f4d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19947ad8dfd71d43fffc2a5975a0d1663180736ca519dc2a1c4bafd17cbcc76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7164e1bbcac4ad832627cb9036637cc92f4b4285831e6a04abfd4fd0904e21a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312a225d18b0eb9ae91f8a87303ef4c896c6aa7435fcf4485d8e2eda65a474ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37527bb32bfd42ed38400659a3b3df47f711df111fc318cd6ca42e4756408df7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93629f3fad3c74e2736307cf494b6301e63f371189d21fb879d22e1535a31a59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ca06d57822f3e105b504240bdddd2d1b5c0b6650afcd0268cd1fa71766687d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:10:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkc7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2cfxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.380918 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9jhc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f80aba4c-9372-4bea-b537-cbd9b0a3e972\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbc0e24a214494b56ffc8998f30296ecb7f846af6ba355b1a7ced0612f34143e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-th2m6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9jhc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.396839 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2w92b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"791456e8-8d95-4cdb-8fd1-d06a7586b328\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv7p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2w92b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.420727 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:10:08Z\\\",\\\"message\\\":\\\"W0129 16:09:57.450571 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:09:57.450796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769702997 cert, and key in /tmp/serving-cert-976236263/serving-signer.crt, /tmp/serving-cert-976236263/serving-signer.key\\\\nI0129 16:09:57.732705 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:09:57.737820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:09:57.738179 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:09:57.739209 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-976236263/tls.crt::/tmp/serving-cert-976236263/tls.key\\\\\\\"\\\\nF0129 16:10:08.232982 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.429243 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.429326 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.429352 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.429384 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.429408 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:29Z","lastTransitionTime":"2026-01-29T16:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.452277 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56dc350a62212e8fe3f0e111610fd5fb645582503be465949ea0e97e9dab1a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ea000eb0a41ef1ceab485441cd6ad2c665f81f0758597f166c693f23ae3315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.469800 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-46dqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f846b283-5468-4014-ba05-da5bfffa2ebc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4885f484b167e32a8d0767759d7c131508ad3c99a5019b237912064c8152510c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbd9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-46dqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.491591 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b2ttm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89560008-8bdc-4640-af11-681d825e69d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c747ed61a18e27d63630395860ce896242426b1ce46ea5f9d00534b808804a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:11:03Z\\\",\\\"message\\\":\\\"2026-01-29T16:10:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_27c11876-c5ae-4d35-b720-78ab09bfac0c\\\\n2026-01-29T16:10:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_27c11876-c5ae-4d35-b720-78ab09bfac0c to /host/opt/cni/bin/\\\\n2026-01-29T16:10:18Z [verbose] multus-daemon started\\\\n2026-01-29T16:10:18Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:11:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:10:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp6mh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:10:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b2ttm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.506084 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a65c2b0-9568-4a06-8073-93ec194b4ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89cb2f8d441042c4c95e3cc056f991565c18bd93dcb0d61f3735e2451ff439a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86530d623401c3cf5ebe44fcc1c2110a5f4dde059a5d27f74e118d5ca1df40dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86530d623401c3cf5ebe44fcc1c2110a5f4dde059a5d27f74e118d5ca1df40dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.532167 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.532224 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.532243 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.532281 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.532306 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:29Z","lastTransitionTime":"2026-01-29T16:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.536908 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb99202-b98d-4b54-bec6-9e6a90d5cbd5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4ddd8695e3d9e8fbc9ce0cd1d83ee134a3cd1377940f4b8763ce0999b185f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://988169eacf2bf87759260b2f9a1a8786b0bdfb3fd2c0b4f4ea2425d1eaa5ccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7960e9e1ccf45e585a4a9610e1f5684caa0d939bc553335f1563ea5f1408346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c9bf119056f20eb63b1c55993b4e5e4fbce0ef1e1a0fc20c49047eb9c2af1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d42b2248c67787b28fa9139907de67ddd709e032ea2a495acbc0a5c2f2a109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b585d5aaa0cedba5a448db0c17eb71e468c5b8b091f0c61767217f5a949c8de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:55Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f5ccea52820f488eae0c05e6a08b7e1ff0374f48484107886cf8e45e064965c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f73971e4ba98224ed462ff8491a7fd41dbb7c64a15149330a435d91eefd6d334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.558311 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e7a70f-3b2d-4ee9-b9b9-160724395d19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f890bfe59f701daca9b7eb40154d970341a9bdba499dc934e091e90a4a30c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5dfdef978b46c3177ff0c01197e9d43fd086408ff2ed4f81199581bcf6a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c5cff6518bf6978dfdf9c3e8ec2fee7b23911d399d5467f91cda5da792f995\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:09:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.580323 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13410896f482e837a870858f589327c25b1e85d9dd6f567853a8ff6aec87294b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.597024 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:10:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79d7d2d8c31924f92e593dd3fd079569b0dc714c7b9d517961ade71c470cf8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:10:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.635344 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.635412 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.635430 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.635455 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.635475 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:29Z","lastTransitionTime":"2026-01-29T16:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.738968 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.739052 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.739080 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.739113 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.739135 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:29Z","lastTransitionTime":"2026-01-29T16:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.842833 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.842896 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.842911 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.842960 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.842974 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:29Z","lastTransitionTime":"2026-01-29T16:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.864435 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 12:19:04.312450248 +0000 UTC Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.945824 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.945866 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.945876 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.945891 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:29 crc kubenswrapper[4714]: I0129 16:11:29.945904 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:29Z","lastTransitionTime":"2026-01-29T16:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.048903 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.049077 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.049097 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.049127 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.049148 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:30Z","lastTransitionTime":"2026-01-29T16:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.152147 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.152441 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.152518 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.152601 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.152657 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:30Z","lastTransitionTime":"2026-01-29T16:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.183385 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.183501 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.183538 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:30 crc kubenswrapper[4714]: E0129 16:11:30.183595 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:30 crc kubenswrapper[4714]: E0129 16:11:30.183895 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:30 crc kubenswrapper[4714]: E0129 16:11:30.184292 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.256165 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.256232 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.256250 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.256272 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.256290 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:30Z","lastTransitionTime":"2026-01-29T16:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.359334 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.359391 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.359409 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.359433 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.359454 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:30Z","lastTransitionTime":"2026-01-29T16:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.462778 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.462864 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.462889 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.462920 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.462988 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:30Z","lastTransitionTime":"2026-01-29T16:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.566207 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.566516 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.566601 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.566696 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.566783 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:30Z","lastTransitionTime":"2026-01-29T16:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.669895 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.670377 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.670561 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.670716 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.670962 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:30Z","lastTransitionTime":"2026-01-29T16:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.773739 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.773783 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.773795 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.773813 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.773827 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:30Z","lastTransitionTime":"2026-01-29T16:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.865016 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 06:56:06.535804265 +0000 UTC Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.876830 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.877147 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.877289 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.877429 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.877577 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:30Z","lastTransitionTime":"2026-01-29T16:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.980527 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.980976 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.981175 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.981393 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:30 crc kubenswrapper[4714]: I0129 16:11:30.981570 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:30Z","lastTransitionTime":"2026-01-29T16:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.085765 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.085852 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.086098 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.086148 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.086176 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:31Z","lastTransitionTime":"2026-01-29T16:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.184347 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:31 crc kubenswrapper[4714]: E0129 16:11:31.185623 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.189960 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.190068 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.190089 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.190153 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.190173 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:31Z","lastTransitionTime":"2026-01-29T16:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.293463 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.293534 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.293558 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.293584 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.293601 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:31Z","lastTransitionTime":"2026-01-29T16:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.396371 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.396442 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.396458 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.396481 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.396497 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:31Z","lastTransitionTime":"2026-01-29T16:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.500523 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.500614 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.500643 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.500685 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.500709 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:31Z","lastTransitionTime":"2026-01-29T16:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.604109 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.604184 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.604208 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.604241 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.604265 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:31Z","lastTransitionTime":"2026-01-29T16:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.706427 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.706473 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.706485 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.706500 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.706510 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:31Z","lastTransitionTime":"2026-01-29T16:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.809769 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.809862 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.809874 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.809893 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.809905 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:31Z","lastTransitionTime":"2026-01-29T16:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.865918 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:03:46.137696162 +0000 UTC Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.913224 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.913256 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.913264 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.913278 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:31 crc kubenswrapper[4714]: I0129 16:11:31.913286 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:31Z","lastTransitionTime":"2026-01-29T16:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.016656 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.016731 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.016756 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.016786 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.016803 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:32Z","lastTransitionTime":"2026-01-29T16:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.120020 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.120482 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.120688 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.120830 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.120997 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:32Z","lastTransitionTime":"2026-01-29T16:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.184057 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.184092 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:32 crc kubenswrapper[4714]: E0129 16:11:32.184264 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.184359 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:32 crc kubenswrapper[4714]: E0129 16:11:32.184406 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:32 crc kubenswrapper[4714]: E0129 16:11:32.184739 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.224800 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.224867 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.224886 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.224910 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.224966 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:32Z","lastTransitionTime":"2026-01-29T16:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.331563 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.331645 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.331671 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.331701 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.331720 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:32Z","lastTransitionTime":"2026-01-29T16:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.435812 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.435869 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.435880 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.435909 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.435923 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:32Z","lastTransitionTime":"2026-01-29T16:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.542066 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.542104 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.542117 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.542132 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.542144 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:32Z","lastTransitionTime":"2026-01-29T16:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.652686 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.652751 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.652763 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.652780 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.652791 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:32Z","lastTransitionTime":"2026-01-29T16:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.754485 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.754547 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.754557 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.754571 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.754582 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:32Z","lastTransitionTime":"2026-01-29T16:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.857059 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.857101 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.857110 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.857130 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.857142 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:32Z","lastTransitionTime":"2026-01-29T16:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.866174 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 07:38:31.408393991 +0000 UTC Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.880087 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.880131 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.880142 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.880160 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.880170 4714 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:11:32Z","lastTransitionTime":"2026-01-29T16:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.984417 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-dvwfr"] Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.984919 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dvwfr" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.986512 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.986596 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.988562 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 16:11:32 crc kubenswrapper[4714]: I0129 16:11:32.988707 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.008074 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-2cfxk" podStartSLOduration=79.008051164 podStartE2EDuration="1m19.008051164s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:11:33.007958141 +0000 UTC m=+99.528459271" watchObservedRunningTime="2026-01-29 16:11:33.008051164 +0000 UTC m=+99.528552284" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.033056 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-c9jhc" podStartSLOduration=79.033036883 podStartE2EDuration="1m19.033036883s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:11:33.018639388 +0000 UTC m=+99.539140508" watchObservedRunningTime="2026-01-29 16:11:33.033036883 +0000 UTC m=+99.553538003" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.070860 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-46dqc" podStartSLOduration=79.070842391 podStartE2EDuration="1m19.070842391s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:11:33.070750439 +0000 UTC m=+99.591251559" watchObservedRunningTime="2026-01-29 16:11:33.070842391 +0000 UTC m=+99.591343511" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.081234 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6122b3fa-a2a7-4328-a057-ee8692c5dc83-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-dvwfr\" (UID: \"6122b3fa-a2a7-4328-a057-ee8692c5dc83\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dvwfr" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.081270 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6122b3fa-a2a7-4328-a057-ee8692c5dc83-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dvwfr\" (UID: \"6122b3fa-a2a7-4328-a057-ee8692c5dc83\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dvwfr" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.081317 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6122b3fa-a2a7-4328-a057-ee8692c5dc83-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-dvwfr\" (UID: \"6122b3fa-a2a7-4328-a057-ee8692c5dc83\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dvwfr" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.081336 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6122b3fa-a2a7-4328-a057-ee8692c5dc83-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-dvwfr\" (UID: \"6122b3fa-a2a7-4328-a057-ee8692c5dc83\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dvwfr" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.081359 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6122b3fa-a2a7-4328-a057-ee8692c5dc83-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-dvwfr\" (UID: \"6122b3fa-a2a7-4328-a057-ee8692c5dc83\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dvwfr" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.097620 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-b2ttm" podStartSLOduration=79.097604792 podStartE2EDuration="1m19.097604792s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:11:33.082752654 +0000 UTC m=+99.603253774" watchObservedRunningTime="2026-01-29 
16:11:33.097604792 +0000 UTC m=+99.618105902" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.111256 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=80.111236234 podStartE2EDuration="1m20.111236234s" podCreationTimestamp="2026-01-29 16:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:11:33.098091296 +0000 UTC m=+99.618592416" watchObservedRunningTime="2026-01-29 16:11:33.111236234 +0000 UTC m=+99.631737364" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.140127 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=79.140110336 podStartE2EDuration="1m19.140110336s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:11:33.12463544 +0000 UTC m=+99.645136570" watchObservedRunningTime="2026-01-29 16:11:33.140110336 +0000 UTC m=+99.660611456" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.162020 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=26.162002646 podStartE2EDuration="26.162002646s" podCreationTimestamp="2026-01-29 16:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:11:33.160696408 +0000 UTC m=+99.681197528" watchObservedRunningTime="2026-01-29 16:11:33.162002646 +0000 UTC m=+99.682503766" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.182509 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6122b3fa-a2a7-4328-a057-ee8692c5dc83-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-dvwfr\" (UID: \"6122b3fa-a2a7-4328-a057-ee8692c5dc83\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dvwfr" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.182824 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6122b3fa-a2a7-4328-a057-ee8692c5dc83-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-dvwfr\" (UID: \"6122b3fa-a2a7-4328-a057-ee8692c5dc83\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dvwfr" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.182920 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6122b3fa-a2a7-4328-a057-ee8692c5dc83-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-dvwfr\" (UID: \"6122b3fa-a2a7-4328-a057-ee8692c5dc83\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dvwfr" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.183102 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6122b3fa-a2a7-4328-a057-ee8692c5dc83-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-dvwfr\" (UID: \"6122b3fa-a2a7-4328-a057-ee8692c5dc83\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dvwfr" Jan 29 
16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.183228 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6122b3fa-a2a7-4328-a057-ee8692c5dc83-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dvwfr\" (UID: \"6122b3fa-a2a7-4328-a057-ee8692c5dc83\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dvwfr" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.183295 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.183447 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6122b3fa-a2a7-4328-a057-ee8692c5dc83-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-dvwfr\" (UID: \"6122b3fa-a2a7-4328-a057-ee8692c5dc83\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dvwfr" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.183580 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6122b3fa-a2a7-4328-a057-ee8692c5dc83-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-dvwfr\" (UID: \"6122b3fa-a2a7-4328-a057-ee8692c5dc83\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dvwfr" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.184044 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=77.1840359 podStartE2EDuration="1m17.1840359s" podCreationTimestamp="2026-01-29 16:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:11:33.183543006 +0000 UTC m=+99.704044136" watchObservedRunningTime="2026-01-29 16:11:33.1840359 +0000 UTC m=+99.704537020" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.184259 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6122b3fa-a2a7-4328-a057-ee8692c5dc83-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dvwfr\" (UID: \"6122b3fa-a2a7-4328-a057-ee8692c5dc83\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dvwfr" Jan 29 16:11:33 crc kubenswrapper[4714]: E0129 16:11:33.184375 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.189615 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6122b3fa-a2a7-4328-a057-ee8692c5dc83-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-dvwfr\" (UID: \"6122b3fa-a2a7-4328-a057-ee8692c5dc83\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dvwfr" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.199902 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6122b3fa-a2a7-4328-a057-ee8692c5dc83-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-dvwfr\" (UID: \"6122b3fa-a2a7-4328-a057-ee8692c5dc83\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dvwfr" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.220326 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podStartSLOduration=79.220309325 podStartE2EDuration="1m19.220309325s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:11:33.199268679 +0000 UTC m=+99.719769799" watchObservedRunningTime="2026-01-29 16:11:33.220309325 +0000 UTC m=+99.740810445" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.246999 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tg8sw" podStartSLOduration=78.246983642 podStartE2EDuration="1m18.246983642s" podCreationTimestamp="2026-01-29 16:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:11:33.24583672 +0000 UTC m=+99.766337840" watchObservedRunningTime="2026-01-29 16:11:33.246983642 +0000 UTC m=+99.767484762" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.267826 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=52.267805742 podStartE2EDuration="52.267805742s" podCreationTimestamp="2026-01-29 16:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:11:33.26739018 +0000 UTC m=+99.787891310" watchObservedRunningTime="2026-01-29 16:11:33.267805742 +0000 UTC m=+99.788306862" Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.301002 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dvwfr" Jan 29 16:11:33 crc kubenswrapper[4714]: W0129 16:11:33.317056 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6122b3fa_a2a7_4328_a057_ee8692c5dc83.slice/crio-ec683597ef95379a94031b5c628e9e0cf98cca1f2833bf03c7c91470ae7b3100 WatchSource:0}: Error finding container ec683597ef95379a94031b5c628e9e0cf98cca1f2833bf03c7c91470ae7b3100: Status 404 returned error can't find the container with id ec683597ef95379a94031b5c628e9e0cf98cca1f2833bf03c7c91470ae7b3100 Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.780236 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dvwfr" event={"ID":"6122b3fa-a2a7-4328-a057-ee8692c5dc83","Type":"ContainerStarted","Data":"ec683597ef95379a94031b5c628e9e0cf98cca1f2833bf03c7c91470ae7b3100"} Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.866377 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 11:55:54.017004966 +0000 UTC Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.866700 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 29 16:11:33 crc kubenswrapper[4714]: I0129 16:11:33.875468 4714 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 16:11:34 crc kubenswrapper[4714]: I0129 16:11:34.093351 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs\") pod \"network-metrics-daemon-2w92b\" (UID: \"791456e8-8d95-4cdb-8fd1-d06a7586b328\") " pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:34 crc kubenswrapper[4714]: E0129 16:11:34.093500 4714 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:11:34 crc kubenswrapper[4714]: E0129 16:11:34.093548 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs podName:791456e8-8d95-4cdb-8fd1-d06a7586b328 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:38.093532375 +0000 UTC m=+164.614033505 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs") pod "network-metrics-daemon-2w92b" (UID: "791456e8-8d95-4cdb-8fd1-d06a7586b328") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:11:34 crc kubenswrapper[4714]: I0129 16:11:34.183680 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:34 crc kubenswrapper[4714]: I0129 16:11:34.183736 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:34 crc kubenswrapper[4714]: E0129 16:11:34.184708 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:34 crc kubenswrapper[4714]: I0129 16:11:34.184806 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:34 crc kubenswrapper[4714]: E0129 16:11:34.184984 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:34 crc kubenswrapper[4714]: E0129 16:11:34.185257 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:34 crc kubenswrapper[4714]: I0129 16:11:34.787127 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dvwfr" event={"ID":"6122b3fa-a2a7-4328-a057-ee8692c5dc83","Type":"ContainerStarted","Data":"5d50732e6565a1ea71d386c131b1504da8f37e3b89c4683cff6fd99dd3fbb374"} Jan 29 16:11:34 crc kubenswrapper[4714]: I0129 16:11:34.804754 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dvwfr" podStartSLOduration=80.804734341 podStartE2EDuration="1m20.804734341s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:11:34.803004501 +0000 UTC m=+101.323505631" watchObservedRunningTime="2026-01-29 16:11:34.804734341 +0000 UTC m=+101.325235471" Jan 29 16:11:35 crc kubenswrapper[4714]: I0129 16:11:35.183684 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:35 crc kubenswrapper[4714]: E0129 16:11:35.183922 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:36 crc kubenswrapper[4714]: I0129 16:11:36.183564 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:36 crc kubenswrapper[4714]: I0129 16:11:36.183679 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:36 crc kubenswrapper[4714]: I0129 16:11:36.183588 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:36 crc kubenswrapper[4714]: E0129 16:11:36.183846 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:36 crc kubenswrapper[4714]: E0129 16:11:36.183982 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:36 crc kubenswrapper[4714]: E0129 16:11:36.184365 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:37 crc kubenswrapper[4714]: I0129 16:11:37.184077 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:37 crc kubenswrapper[4714]: E0129 16:11:37.184259 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:38 crc kubenswrapper[4714]: I0129 16:11:38.185263 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:38 crc kubenswrapper[4714]: I0129 16:11:38.185365 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:38 crc kubenswrapper[4714]: I0129 16:11:38.186218 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:38 crc kubenswrapper[4714]: E0129 16:11:38.186377 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:38 crc kubenswrapper[4714]: E0129 16:11:38.186653 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:38 crc kubenswrapper[4714]: E0129 16:11:38.186869 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:39 crc kubenswrapper[4714]: I0129 16:11:39.183427 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:39 crc kubenswrapper[4714]: E0129 16:11:39.184001 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:40 crc kubenswrapper[4714]: I0129 16:11:40.183151 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:40 crc kubenswrapper[4714]: I0129 16:11:40.183266 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:40 crc kubenswrapper[4714]: I0129 16:11:40.183583 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:40 crc kubenswrapper[4714]: E0129 16:11:40.183297 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:40 crc kubenswrapper[4714]: E0129 16:11:40.183478 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:40 crc kubenswrapper[4714]: E0129 16:11:40.183831 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:41 crc kubenswrapper[4714]: I0129 16:11:41.183762 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:41 crc kubenswrapper[4714]: E0129 16:11:41.184028 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:42 crc kubenswrapper[4714]: I0129 16:11:42.183815 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:42 crc kubenswrapper[4714]: I0129 16:11:42.183870 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:42 crc kubenswrapper[4714]: I0129 16:11:42.183842 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:42 crc kubenswrapper[4714]: E0129 16:11:42.184056 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:42 crc kubenswrapper[4714]: E0129 16:11:42.184263 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:42 crc kubenswrapper[4714]: E0129 16:11:42.184349 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:43 crc kubenswrapper[4714]: I0129 16:11:43.183831 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:43 crc kubenswrapper[4714]: E0129 16:11:43.184074 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:44 crc kubenswrapper[4714]: I0129 16:11:44.184073 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:44 crc kubenswrapper[4714]: I0129 16:11:44.184215 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:44 crc kubenswrapper[4714]: E0129 16:11:44.187297 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:44 crc kubenswrapper[4714]: I0129 16:11:44.187351 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:44 crc kubenswrapper[4714]: E0129 16:11:44.187535 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:44 crc kubenswrapper[4714]: E0129 16:11:44.187707 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:44 crc kubenswrapper[4714]: I0129 16:11:44.189016 4714 scope.go:117] "RemoveContainer" containerID="95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365" Jan 29 16:11:44 crc kubenswrapper[4714]: E0129 16:11:44.189324 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-sbnkt_openshift-ovn-kubernetes(04b20f02-6c1e-4082-8233-8f06bda63195)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" Jan 29 16:11:45 crc kubenswrapper[4714]: I0129 16:11:45.184012 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:45 crc kubenswrapper[4714]: E0129 16:11:45.184254 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:46 crc kubenswrapper[4714]: I0129 16:11:46.183591 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:46 crc kubenswrapper[4714]: I0129 16:11:46.183630 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:46 crc kubenswrapper[4714]: E0129 16:11:46.183895 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:46 crc kubenswrapper[4714]: I0129 16:11:46.183918 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:46 crc kubenswrapper[4714]: E0129 16:11:46.184034 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:46 crc kubenswrapper[4714]: E0129 16:11:46.184150 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:47 crc kubenswrapper[4714]: I0129 16:11:47.183924 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:47 crc kubenswrapper[4714]: E0129 16:11:47.184228 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:48 crc kubenswrapper[4714]: I0129 16:11:48.183781 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:48 crc kubenswrapper[4714]: I0129 16:11:48.183899 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:48 crc kubenswrapper[4714]: I0129 16:11:48.183998 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:48 crc kubenswrapper[4714]: E0129 16:11:48.184023 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:48 crc kubenswrapper[4714]: E0129 16:11:48.184226 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:48 crc kubenswrapper[4714]: E0129 16:11:48.184268 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:49 crc kubenswrapper[4714]: I0129 16:11:49.184074 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:49 crc kubenswrapper[4714]: E0129 16:11:49.184239 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:50 crc kubenswrapper[4714]: I0129 16:11:50.190045 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:50 crc kubenswrapper[4714]: E0129 16:11:50.190178 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:50 crc kubenswrapper[4714]: I0129 16:11:50.190049 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:50 crc kubenswrapper[4714]: E0129 16:11:50.190394 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:50 crc kubenswrapper[4714]: I0129 16:11:50.190466 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:50 crc kubenswrapper[4714]: E0129 16:11:50.190526 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:50 crc kubenswrapper[4714]: I0129 16:11:50.838179 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b2ttm_89560008-8bdc-4640-af11-681d825e69d4/kube-multus/1.log" Jan 29 16:11:50 crc kubenswrapper[4714]: I0129 16:11:50.838771 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b2ttm_89560008-8bdc-4640-af11-681d825e69d4/kube-multus/0.log" Jan 29 16:11:50 crc kubenswrapper[4714]: I0129 16:11:50.838829 4714 generic.go:334] "Generic (PLEG): container finished" podID="89560008-8bdc-4640-af11-681d825e69d4" containerID="c747ed61a18e27d63630395860ce896242426b1ce46ea5f9d00534b808804a58" exitCode=1 Jan 29 16:11:50 crc kubenswrapper[4714]: I0129 16:11:50.838870 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b2ttm" event={"ID":"89560008-8bdc-4640-af11-681d825e69d4","Type":"ContainerDied","Data":"c747ed61a18e27d63630395860ce896242426b1ce46ea5f9d00534b808804a58"} Jan 29 16:11:50 crc kubenswrapper[4714]: I0129 16:11:50.838909 4714 scope.go:117] "RemoveContainer" containerID="a63101a231660b105a82b67269b53217cac5e28d81c8a9e123d259779a76b84a" Jan 29 16:11:50 crc kubenswrapper[4714]: I0129 16:11:50.839458 4714 scope.go:117] "RemoveContainer" containerID="c747ed61a18e27d63630395860ce896242426b1ce46ea5f9d00534b808804a58" Jan 29 16:11:50 crc kubenswrapper[4714]: E0129 16:11:50.839645 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-b2ttm_openshift-multus(89560008-8bdc-4640-af11-681d825e69d4)\"" pod="openshift-multus/multus-b2ttm" podUID="89560008-8bdc-4640-af11-681d825e69d4" Jan 29 16:11:51 crc kubenswrapper[4714]: I0129 16:11:51.183387 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:51 crc kubenswrapper[4714]: E0129 16:11:51.183736 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:51 crc kubenswrapper[4714]: I0129 16:11:51.844020 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b2ttm_89560008-8bdc-4640-af11-681d825e69d4/kube-multus/1.log" Jan 29 16:11:52 crc kubenswrapper[4714]: I0129 16:11:52.184082 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:52 crc kubenswrapper[4714]: I0129 16:11:52.184144 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:52 crc kubenswrapper[4714]: I0129 16:11:52.184112 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:52 crc kubenswrapper[4714]: E0129 16:11:52.184243 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:52 crc kubenswrapper[4714]: E0129 16:11:52.184430 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:52 crc kubenswrapper[4714]: E0129 16:11:52.184520 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:53 crc kubenswrapper[4714]: I0129 16:11:53.183150 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:53 crc kubenswrapper[4714]: E0129 16:11:53.183292 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:54 crc kubenswrapper[4714]: I0129 16:11:54.184240 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:54 crc kubenswrapper[4714]: I0129 16:11:54.184239 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:54 crc kubenswrapper[4714]: I0129 16:11:54.184351 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:54 crc kubenswrapper[4714]: E0129 16:11:54.192438 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:54 crc kubenswrapper[4714]: E0129 16:11:54.193170 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:54 crc kubenswrapper[4714]: E0129 16:11:54.194040 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:54 crc kubenswrapper[4714]: E0129 16:11:54.207085 4714 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 29 16:11:54 crc kubenswrapper[4714]: E0129 16:11:54.280364 4714 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 16:11:55 crc kubenswrapper[4714]: I0129 16:11:55.183246 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:55 crc kubenswrapper[4714]: E0129 16:11:55.183505 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:56 crc kubenswrapper[4714]: I0129 16:11:56.183388 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:56 crc kubenswrapper[4714]: I0129 16:11:56.183453 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:56 crc kubenswrapper[4714]: E0129 16:11:56.183560 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:56 crc kubenswrapper[4714]: I0129 16:11:56.183598 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:56 crc kubenswrapper[4714]: E0129 16:11:56.183697 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:56 crc kubenswrapper[4714]: E0129 16:11:56.183763 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:57 crc kubenswrapper[4714]: I0129 16:11:57.183665 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:57 crc kubenswrapper[4714]: E0129 16:11:57.184012 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:57 crc kubenswrapper[4714]: I0129 16:11:57.185153 4714 scope.go:117] "RemoveContainer" containerID="95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365" Jan 29 16:11:57 crc kubenswrapper[4714]: I0129 16:11:57.868317 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sbnkt_04b20f02-6c1e-4082-8233-8f06bda63195/ovnkube-controller/3.log" Jan 29 16:11:57 crc kubenswrapper[4714]: I0129 16:11:57.871662 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerStarted","Data":"00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6"} Jan 29 16:11:57 crc kubenswrapper[4714]: I0129 16:11:57.872153 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:11:57 crc kubenswrapper[4714]: I0129 16:11:57.925120 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" podStartSLOduration=103.925093306 podStartE2EDuration="1m43.925093306s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:11:57.922261564 +0000 UTC m=+124.442762724" watchObservedRunningTime="2026-01-29 16:11:57.925093306 +0000 UTC m=+124.445594466" Jan 29 16:11:58 crc kubenswrapper[4714]: I0129 16:11:58.183656 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:58 crc kubenswrapper[4714]: I0129 16:11:58.183710 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:11:58 crc kubenswrapper[4714]: I0129 16:11:58.183801 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:11:58 crc kubenswrapper[4714]: E0129 16:11:58.184026 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:58 crc kubenswrapper[4714]: E0129 16:11:58.184165 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:11:58 crc kubenswrapper[4714]: E0129 16:11:58.184269 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:11:58 crc kubenswrapper[4714]: I0129 16:11:58.207412 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2w92b"] Jan 29 16:11:58 crc kubenswrapper[4714]: I0129 16:11:58.876452 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:11:58 crc kubenswrapper[4714]: E0129 16:11:58.877138 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:11:59 crc kubenswrapper[4714]: I0129 16:11:59.183247 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:11:59 crc kubenswrapper[4714]: E0129 16:11:59.183420 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:11:59 crc kubenswrapper[4714]: E0129 16:11:59.281795 4714 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 16:12:00 crc kubenswrapper[4714]: I0129 16:12:00.184131 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:00 crc kubenswrapper[4714]: I0129 16:12:00.184251 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:12:00 crc kubenswrapper[4714]: E0129 16:12:00.184309 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:12:00 crc kubenswrapper[4714]: E0129 16:12:00.184471 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:12:00 crc kubenswrapper[4714]: I0129 16:12:00.184736 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:00 crc kubenswrapper[4714]: E0129 16:12:00.184848 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:12:01 crc kubenswrapper[4714]: I0129 16:12:01.184076 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:01 crc kubenswrapper[4714]: E0129 16:12:01.184325 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:12:02 crc kubenswrapper[4714]: I0129 16:12:02.183576 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:02 crc kubenswrapper[4714]: I0129 16:12:02.183634 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:02 crc kubenswrapper[4714]: E0129 16:12:02.183842 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:12:02 crc kubenswrapper[4714]: I0129 16:12:02.183909 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:12:02 crc kubenswrapper[4714]: E0129 16:12:02.184166 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:12:02 crc kubenswrapper[4714]: E0129 16:12:02.184352 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:12:02 crc kubenswrapper[4714]: I0129 16:12:02.277352 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:12:03 crc kubenswrapper[4714]: I0129 16:12:03.183309 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:03 crc kubenswrapper[4714]: E0129 16:12:03.183494 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:12:04 crc kubenswrapper[4714]: I0129 16:12:04.184302 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:04 crc kubenswrapper[4714]: I0129 16:12:04.184431 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:12:04 crc kubenswrapper[4714]: E0129 16:12:04.184571 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:12:04 crc kubenswrapper[4714]: I0129 16:12:04.184623 4714 scope.go:117] "RemoveContainer" containerID="c747ed61a18e27d63630395860ce896242426b1ce46ea5f9d00534b808804a58" Jan 29 16:12:04 crc kubenswrapper[4714]: E0129 16:12:04.184678 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328" Jan 29 16:12:04 crc kubenswrapper[4714]: I0129 16:12:04.185299 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:04 crc kubenswrapper[4714]: E0129 16:12:04.185466 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:12:04 crc kubenswrapper[4714]: E0129 16:12:04.283147 4714 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 29 16:12:04 crc kubenswrapper[4714]: I0129 16:12:04.901823 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b2ttm_89560008-8bdc-4640-af11-681d825e69d4/kube-multus/1.log"
Jan 29 16:12:04 crc kubenswrapper[4714]: I0129 16:12:04.901913 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b2ttm" event={"ID":"89560008-8bdc-4640-af11-681d825e69d4","Type":"ContainerStarted","Data":"e21aab3b653d9b1f38d58e9c32cbfb8988660ecb96eec4099a6536e09747d8fb"}
Jan 29 16:12:05 crc kubenswrapper[4714]: I0129 16:12:05.183614 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:12:05 crc kubenswrapper[4714]: E0129 16:12:05.183780 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:12:06 crc kubenswrapper[4714]: I0129 16:12:06.183646 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:12:06 crc kubenswrapper[4714]: E0129 16:12:06.183775 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:12:06 crc kubenswrapper[4714]: I0129 16:12:06.183839 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:12:06 crc kubenswrapper[4714]: I0129 16:12:06.183670 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b"
Jan 29 16:12:06 crc kubenswrapper[4714]: E0129 16:12:06.184057 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328"
Jan 29 16:12:06 crc kubenswrapper[4714]: E0129 16:12:06.183963 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:12:07 crc kubenswrapper[4714]: I0129 16:12:07.183250 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:12:07 crc kubenswrapper[4714]: E0129 16:12:07.183500 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:12:08 crc kubenswrapper[4714]: I0129 16:12:08.183659 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:12:08 crc kubenswrapper[4714]: I0129 16:12:08.183714 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b"
Jan 29 16:12:08 crc kubenswrapper[4714]: I0129 16:12:08.185310 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:12:08 crc kubenswrapper[4714]: E0129 16:12:08.185653 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:12:08 crc kubenswrapper[4714]: E0129 16:12:08.185756 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2w92b" podUID="791456e8-8d95-4cdb-8fd1-d06a7586b328"
Jan 29 16:12:08 crc kubenswrapper[4714]: E0129 16:12:08.186049 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:12:09 crc kubenswrapper[4714]: I0129 16:12:09.184234 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:12:09 crc kubenswrapper[4714]: E0129 16:12:09.184405 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:12:10 crc kubenswrapper[4714]: I0129 16:12:10.183430 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b"
Jan 29 16:12:10 crc kubenswrapper[4714]: I0129 16:12:10.183491 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:12:10 crc kubenswrapper[4714]: I0129 16:12:10.184140 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:12:10 crc kubenswrapper[4714]: I0129 16:12:10.186294 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 29 16:12:10 crc kubenswrapper[4714]: I0129 16:12:10.188990 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 29 16:12:10 crc kubenswrapper[4714]: I0129 16:12:10.189220 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 29 16:12:10 crc kubenswrapper[4714]: I0129 16:12:10.189773 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 29 16:12:11 crc kubenswrapper[4714]: I0129 16:12:11.183808 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:12:11 crc kubenswrapper[4714]: I0129 16:12:11.185630 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 29 16:12:11 crc kubenswrapper[4714]: I0129 16:12:11.187033 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.630288 4714 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.685488 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-xvrxj"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.686328 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xvrxj"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.687549 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jb6jw"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.688497 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jb6jw"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.689565 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.689700 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.689733 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.690083 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.690148 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.693893 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.693911 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.694079 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.694075 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-kvp9d"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.695250 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-kvp9d"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.696785 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xlczd"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.697818 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.699272 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcj84"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.699988 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcj84"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.702571 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.703760 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.704193 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6jl75"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.711684 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.712024 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.726284 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4dn69"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.726703 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-m2g9h"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.727055 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-z4h55"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.727595 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.727704 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-6jl75"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.727664 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-z4h55"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.728194 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4dn69"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.728564 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-m2g9h"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.728587 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.728782 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.728920 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.728960 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.729082 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.729301 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.729604 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.729722 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.729800 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.729822 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.729843 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dwsm5"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.730024 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.730038 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.729723 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.730183 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.730408 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.730478 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.730416 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.730542 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dwsm5"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.730488 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.730700 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99knh"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.730614 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.730648 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.731628 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99knh"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.732116 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.732224 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.732410 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.732453 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.732489 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.732576 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.732617 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-fn75b"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.733162 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fn75b"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.732624 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.733611 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.732650 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.732660 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.732687 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.738980 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h8b4r"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.739684 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nh2m9"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.739987 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cp5md"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.740146 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.740361 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nh2m9"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.740468 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cp5md"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.741081 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.741089 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.741674 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.741869 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.742061 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.742169 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.742258 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.742347 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.742439 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.742518 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.742603 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.742684 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.742796 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.742859 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.743254 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.748765 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.749059 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gnjmm"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.751752 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.760505 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.760782 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.761159 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-lz6mw"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.762114 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nf7jb"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.763244 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.763373 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9thpj"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.764416 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.764954 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.765209 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.766038 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.766376 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.767316 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.767738 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.768263 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-sv7xw"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.768434 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nf7jb"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.769108 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-lz6mw"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.769356 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mrprd"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.770096 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-r6cxt"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.768280 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.768877 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.769269 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.769563 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.774260 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.774531 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.774636 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.774820 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.775134 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.775307 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.775966 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.795291 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.795533 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.795613 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.795885 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9thpj"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.796081 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.796251 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mrprd"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.796367 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.796477 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-sv7xw"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.797111 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r6cxt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.797453 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2lpzx"]
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.797761 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.797921 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2lpzx"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.797963 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.798388 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.799095 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.799172 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.799258 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.801344 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.801637 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.801773 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.801917 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.802077 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.802423 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.802464 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.802496 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnhkm\" (UniqueName: \"kubernetes.io/projected/c779f8ba-7614-49f1-be6d-a9e316ec59ba-kube-api-access-rnhkm\") pod \"apiserver-7bbb656c7d-kgl5s\" (UID: \"c779f8ba-7614-49f1-be6d-a9e316ec59ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.802522 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b5da98c-0704-41c7-8563-707f7af93f41-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-99knh\" (UID: \"5b5da98c-0704-41c7-8563-707f7af93f41\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99knh"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.802546 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f3e2f962-69e3-4008-a45f-5c35677f7f36-machine-approver-tls\") pod \"machine-approver-56656f9798-xvrxj\" (UID: \"f3e2f962-69e3-4008-a45f-5c35677f7f36\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xvrxj"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.802571 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92-config\") pod \"machine-api-operator-5694c8668f-z4h55\" (UID: \"bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z4h55"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.802596 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99bab267-639b-48b1-abc4-8c0373200a39-config\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.802618 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.802641 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92-images\") pod \"machine-api-operator-5694c8668f-z4h55\" (UID: \"bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z4h55"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.802666 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec-serving-cert\") pod \"console-operator-58897d9998-kvp9d\" (UID: \"ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec\") " pod="openshift-console-operator/console-operator-58897d9998-kvp9d"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.802688 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0387af3d-8796-46b0-9282-9ecbda7fe3a7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cp5md\" (UID: \"0387af3d-8796-46b0-9282-9ecbda7fe3a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cp5md"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.802721 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b25d77ec-57de-4c2a-b534-e98bf149b92a-serving-cert\") pod \"authentication-operator-69f744f599-jb6jw\" (UID: \"b25d77ec-57de-4c2a-b534-e98bf149b92a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb6jw"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.802743 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/99bab267-639b-48b1-abc4-8c0373200a39-audit-dir\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.802750 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.802768 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec-config\") pod \"console-operator-58897d9998-kvp9d\" (UID: \"ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec\") " pod="openshift-console-operator/console-operator-58897d9998-kvp9d"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.802789 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hv4x\" (UniqueName: \"kubernetes.io/projected/f3e2f962-69e3-4008-a45f-5c35677f7f36-kube-api-access-5hv4x\") pod \"machine-approver-56656f9798-xvrxj\" (UID: \"f3e2f962-69e3-4008-a45f-5c35677f7f36\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xvrxj"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.802812 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97cfbecd-36ef-409b-94e9-f607a1fa2c42-config\") pod \"etcd-operator-b45778765-nh2m9\" (UID: \"97cfbecd-36ef-409b-94e9-f607a1fa2c42\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nh2m9"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.802834 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6de35940-bef4-4dfa-9a83-08ba29d73399-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4dn69\" (UID: \"6de35940-bef4-4dfa-9a83-08ba29d73399\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4dn69"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.802872 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2rbv\" (UniqueName: \"kubernetes.io/projected/99bab267-639b-48b1-abc4-8c0373200a39-kube-api-access-f2rbv\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.802897 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0e2a789d-6a90-4d60-881e-9562cd92e0a7-console-config\") pod \"console-f9d7485db-m2g9h\" (UID: \"0e2a789d-6a90-4d60-881e-9562cd92e0a7\") " pod="openshift-console/console-f9d7485db-m2g9h"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.802925 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3e2f962-69e3-4008-a45f-5c35677f7f36-auth-proxy-config\") pod \"machine-approver-56656f9798-xvrxj\" (UID: \"f3e2f962-69e3-4008-a45f-5c35677f7f36\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xvrxj"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803156 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b25d77ec-57de-4c2a-b534-e98bf149b92a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jb6jw\" (UID: \"b25d77ec-57de-4c2a-b534-e98bf149b92a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb6jw"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803196 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-z4h55\" (UID: \"bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z4h55"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803241 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/832097a5-4691-42b6-99cc-38679071d5ee-audit-dir\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803265 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803288 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knzbq\" (UniqueName: \"kubernetes.io/projected/8f71ba3e-c687-4ff7-9475-1e18ded764f6-kube-api-access-knzbq\") pod \"openshift-config-operator-7777fb866f-dwsm5\" (UID: \"8f71ba3e-c687-4ff7-9475-1e18ded764f6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dwsm5"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803310 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c779f8ba-7614-49f1-be6d-a9e316ec59ba-serving-cert\") pod \"apiserver-7bbb656c7d-kgl5s\" (UID: \"c779f8ba-7614-49f1-be6d-a9e316ec59ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803334 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c2d0611-58f8-4a7e-8280-361c80d62802-config\") pod \"controller-manager-879f6c89f-xlczd\" (UID: \"3c2d0611-58f8-4a7e-8280-361c80d62802\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803361 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srhd7\" (UniqueName: \"kubernetes.io/projected/fbfdd647-1d64-4d35-9af2-6dee52b4c860-kube-api-access-srhd7\") pod \"route-controller-manager-6576b87f9c-m2qxw\" (UID: \"fbfdd647-1d64-4d35-9af2-6dee52b4c860\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803386 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99cl5\" (UniqueName: \"kubernetes.io/projected/b25d77ec-57de-4c2a-b534-e98bf149b92a-kube-api-access-99cl5\") pod \"authentication-operator-69f744f599-jb6jw\" (UID: \"b25d77ec-57de-4c2a-b534-e98bf149b92a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb6jw"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803411 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c779f8ba-7614-49f1-be6d-a9e316ec59ba-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kgl5s\" (UID: \"c779f8ba-7614-49f1-be6d-a9e316ec59ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803446 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/99bab267-639b-48b1-abc4-8c0373200a39-etcd-client\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803467 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/832097a5-4691-42b6-99cc-38679071d5ee-audit-policies\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803488 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803510 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3e2f962-69e3-4008-a45f-5c35677f7f36-config\") pod \"machine-approver-56656f9798-xvrxj\" (UID: \"f3e2f962-69e3-4008-a45f-5c35677f7f36\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xvrxj"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803532 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b25d77ec-57de-4c2a-b534-e98bf149b92a-service-ca-bundle\") pod \"authentication-operator-69f744f599-jb6jw\" (UID: \"b25d77ec-57de-4c2a-b534-e98bf149b92a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb6jw"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803554 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb7zc\" (UniqueName: \"kubernetes.io/projected/eacb9f84-018a-4f64-b211-c9bedce50b9e-kube-api-access-sb7zc\") pod \"openshift-apiserver-operator-796bbdcf4f-vcj84\" (UID: \"eacb9f84-018a-4f64-b211-c9bedce50b9e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcj84"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803573 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/99bab267-639b-48b1-abc4-8c0373200a39-encryption-config\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803595 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803621 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbfdd647-1d64-4d35-9af2-6dee52b4c860-config\") pod \"route-controller-manager-6576b87f9c-m2qxw\" (UID: \"fbfdd647-1d64-4d35-9af2-6dee52b4c860\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw"
Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803641 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\"
(UniqueName: \"kubernetes.io/configmap/b25d77ec-57de-4c2a-b534-e98bf149b92a-config\") pod \"authentication-operator-69f744f599-jb6jw\" (UID: \"b25d77ec-57de-4c2a-b534-e98bf149b92a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb6jw" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803662 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c2d0611-58f8-4a7e-8280-361c80d62802-client-ca\") pod \"controller-manager-879f6c89f-xlczd\" (UID: \"3c2d0611-58f8-4a7e-8280-361c80d62802\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803688 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6de35940-bef4-4dfa-9a83-08ba29d73399-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4dn69\" (UID: \"6de35940-bef4-4dfa-9a83-08ba29d73399\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4dn69" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803711 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/99bab267-639b-48b1-abc4-8c0373200a39-image-import-ca\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803733 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8nmp\" (UniqueName: \"kubernetes.io/projected/832097a5-4691-42b6-99cc-38679071d5ee-kube-api-access-l8nmp\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803758 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/99bab267-639b-48b1-abc4-8c0373200a39-node-pullsecrets\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803779 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803802 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eacb9f84-018a-4f64-b211-c9bedce50b9e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vcj84\" (UID: \"eacb9f84-018a-4f64-b211-c9bedce50b9e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcj84" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803822 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/99bab267-639b-48b1-abc4-8c0373200a39-serving-cert\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803841 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fbfdd647-1d64-4d35-9af2-6dee52b4c860-client-ca\") pod \"route-controller-manager-6576b87f9c-m2qxw\" (UID: \"fbfdd647-1d64-4d35-9af2-6dee52b4c860\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803859 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0e2a789d-6a90-4d60-881e-9562cd92e0a7-service-ca\") pod \"console-f9d7485db-m2g9h\" (UID: \"0e2a789d-6a90-4d60-881e-9562cd92e0a7\") " pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803879 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e2a789d-6a90-4d60-881e-9562cd92e0a7-console-serving-cert\") pod \"console-f9d7485db-m2g9h\" (UID: \"0e2a789d-6a90-4d60-881e-9562cd92e0a7\") " pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803900 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8tm6\" (UniqueName: \"kubernetes.io/projected/bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92-kube-api-access-k8tm6\") pod \"machine-api-operator-5694c8668f-z4h55\" (UID: \"bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z4h55" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.803919 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6de35940-bef4-4dfa-9a83-08ba29d73399-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4dn69\" (UID: \"6de35940-bef4-4dfa-9a83-08ba29d73399\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4dn69" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.815387 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.816526 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8f71ba3e-c687-4ff7-9475-1e18ded764f6-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dwsm5\" (UID: \"8f71ba3e-c687-4ff7-9475-1e18ded764f6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dwsm5" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.816565 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmd5b\" (UniqueName: \"kubernetes.io/projected/5b5da98c-0704-41c7-8563-707f7af93f41-kube-api-access-hmd5b\") pod \"cluster-samples-operator-665b6dd947-99knh\" (UID: \"5b5da98c-0704-41c7-8563-707f7af93f41\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99knh" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 
16:12:13.816595 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec-trusted-ca\") pod \"console-operator-58897d9998-kvp9d\" (UID: \"ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec\") " pod="openshift-console-operator/console-operator-58897d9998-kvp9d" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.816624 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c779f8ba-7614-49f1-be6d-a9e316ec59ba-audit-policies\") pod \"apiserver-7bbb656c7d-kgl5s\" (UID: \"c779f8ba-7614-49f1-be6d-a9e316ec59ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.816649 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.816678 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0e2a789d-6a90-4d60-881e-9562cd92e0a7-oauth-serving-cert\") pod \"console-f9d7485db-m2g9h\" (UID: \"0e2a789d-6a90-4d60-881e-9562cd92e0a7\") " pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.816700 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f71ba3e-c687-4ff7-9475-1e18ded764f6-serving-cert\") pod \"openshift-config-operator-7777fb866f-dwsm5\" (UID: \"8f71ba3e-c687-4ff7-9475-1e18ded764f6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dwsm5" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.816721 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c779f8ba-7614-49f1-be6d-a9e316ec59ba-etcd-client\") pod \"apiserver-7bbb656c7d-kgl5s\" (UID: \"c779f8ba-7614-49f1-be6d-a9e316ec59ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.816748 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x5xk\" (UniqueName: \"kubernetes.io/projected/ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec-kube-api-access-4x5xk\") pod \"console-operator-58897d9998-kvp9d\" (UID: \"ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec\") " pod="openshift-console-operator/console-operator-58897d9998-kvp9d" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.816772 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/97cfbecd-36ef-409b-94e9-f607a1fa2c42-etcd-service-ca\") pod \"etcd-operator-b45778765-nh2m9\" (UID: \"97cfbecd-36ef-409b-94e9-f607a1fa2c42\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nh2m9" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.816793 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0387af3d-8796-46b0-9282-9ecbda7fe3a7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cp5md\" (UID: \"0387af3d-8796-46b0-9282-9ecbda7fe3a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cp5md" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.816818 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/97cfbecd-36ef-409b-94e9-f607a1fa2c42-etcd-ca\") pod \"etcd-operator-b45778765-nh2m9\" (UID: \"97cfbecd-36ef-409b-94e9-f607a1fa2c42\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nh2m9" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.816842 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c779f8ba-7614-49f1-be6d-a9e316ec59ba-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kgl5s\" (UID: \"c779f8ba-7614-49f1-be6d-a9e316ec59ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.816865 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/99bab267-639b-48b1-abc4-8c0373200a39-etcd-serving-ca\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.816886 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk8rh\" (UniqueName: \"kubernetes.io/projected/3c2d0611-58f8-4a7e-8280-361c80d62802-kube-api-access-pk8rh\") pod \"controller-manager-879f6c89f-xlczd\" (UID: \"3c2d0611-58f8-4a7e-8280-361c80d62802\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.816908 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c779f8ba-7614-49f1-be6d-a9e316ec59ba-audit-dir\") pod \"apiserver-7bbb656c7d-kgl5s\" (UID: \"c779f8ba-7614-49f1-be6d-a9e316ec59ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.816949 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0e2a789d-6a90-4d60-881e-9562cd92e0a7-console-oauth-config\") pod \"console-f9d7485db-m2g9h\" (UID: \"0e2a789d-6a90-4d60-881e-9562cd92e0a7\") " pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.816971 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c779f8ba-7614-49f1-be6d-a9e316ec59ba-encryption-config\") pod \"apiserver-7bbb656c7d-kgl5s\" (UID: \"c779f8ba-7614-49f1-be6d-a9e316ec59ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.816993 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8qsw\" (UniqueName: 
\"kubernetes.io/projected/6de35940-bef4-4dfa-9a83-08ba29d73399-kube-api-access-j8qsw\") pod \"cluster-image-registry-operator-dc59b4c8b-4dn69\" (UID: \"6de35940-bef4-4dfa-9a83-08ba29d73399\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4dn69" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.817014 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.817037 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97cfbecd-36ef-409b-94e9-f607a1fa2c42-serving-cert\") pod \"etcd-operator-b45778765-nh2m9\" (UID: \"97cfbecd-36ef-409b-94e9-f607a1fa2c42\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nh2m9" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.817063 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbfdd647-1d64-4d35-9af2-6dee52b4c860-serving-cert\") pod \"route-controller-manager-6576b87f9c-m2qxw\" (UID: \"fbfdd647-1d64-4d35-9af2-6dee52b4c860\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.817085 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e2a789d-6a90-4d60-881e-9562cd92e0a7-trusted-ca-bundle\") pod \"console-f9d7485db-m2g9h\" (UID: \"0e2a789d-6a90-4d60-881e-9562cd92e0a7\") " pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.817110 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fbcn\" (UniqueName: \"kubernetes.io/projected/42b66dc3-a385-4350-a943-50f062da35f7-kube-api-access-2fbcn\") pod \"downloads-7954f5f757-fn75b\" (UID: \"42b66dc3-a385-4350-a943-50f062da35f7\") " pod="openshift-console/downloads-7954f5f757-fn75b" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.817132 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99bab267-639b-48b1-abc4-8c0373200a39-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.817153 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.817175 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.817212 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/97cfbecd-36ef-409b-94e9-f607a1fa2c42-etcd-client\") pod \"etcd-operator-b45778765-nh2m9\" (UID: \"97cfbecd-36ef-409b-94e9-f607a1fa2c42\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nh2m9" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.817233 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c2d0611-58f8-4a7e-8280-361c80d62802-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xlczd\" (UID: \"3c2d0611-58f8-4a7e-8280-361c80d62802\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.817256 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0387af3d-8796-46b0-9282-9ecbda7fe3a7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cp5md\" (UID: \"0387af3d-8796-46b0-9282-9ecbda7fe3a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cp5md" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.817281 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn272\" (UniqueName: \"kubernetes.io/projected/0e2a789d-6a90-4d60-881e-9562cd92e0a7-kube-api-access-bn272\") pod \"console-f9d7485db-m2g9h\" (UID: \"0e2a789d-6a90-4d60-881e-9562cd92e0a7\") " pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.817306 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c2d0611-58f8-4a7e-8280-361c80d62802-serving-cert\") pod \"controller-manager-879f6c89f-xlczd\" (UID: \"3c2d0611-58f8-4a7e-8280-361c80d62802\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.817328 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eacb9f84-018a-4f64-b211-c9bedce50b9e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vcj84\" (UID: \"eacb9f84-018a-4f64-b211-c9bedce50b9e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcj84" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.817351 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjjl8\" (UniqueName: \"kubernetes.io/projected/97cfbecd-36ef-409b-94e9-f607a1fa2c42-kube-api-access-rjjl8\") pod \"etcd-operator-b45778765-nh2m9\" (UID: \"97cfbecd-36ef-409b-94e9-f607a1fa2c42\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nh2m9" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.817372 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/99bab267-639b-48b1-abc4-8c0373200a39-audit\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.817262 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.827031 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.827574 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.828378 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.828595 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.828836 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.842814 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l2t56"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.843613 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-l2t56" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.844062 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ch6wr"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.844087 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.844553 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ch6wr" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.844690 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xtzbx"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.845314 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xtzbx" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.846199 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzg2m"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.846749 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzg2m" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.850194 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.850508 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.851838 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-chzp2"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.852421 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-chzp2" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.853339 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.868392 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-44gfk"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.869641 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zkbcz"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.870813 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sq9mx"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.871704 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sq9mx" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.872055 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-44gfk" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.872303 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zkbcz" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.874595 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.874868 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v68nn"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.876111 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v68nn" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.878987 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495040-5mkf8"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.879893 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbrmk"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.880680 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5mkf8" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.881144 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbrmk" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.881182 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ljnh7"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.882429 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ljnh7" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.882696 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-jcdhl"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.885130 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-jcdhl" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.891022 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4dn69"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.891080 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jb6jw"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.892757 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.893099 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kfqcf"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.903751 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xlczd"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.903871 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kfqcf" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.906969 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.909164 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-m2g9h"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.909978 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.912035 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fn75b"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.915838 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ch6wr"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.919010 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x5xk\" (UniqueName: \"kubernetes.io/projected/ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec-kube-api-access-4x5xk\") pod \"console-operator-58897d9998-kvp9d\" (UID: \"ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec\") " pod="openshift-console-operator/console-operator-58897d9998-kvp9d" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.919121 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/97cfbecd-36ef-409b-94e9-f607a1fa2c42-etcd-service-ca\") pod \"etcd-operator-b45778765-nh2m9\" (UID: \"97cfbecd-36ef-409b-94e9-f607a1fa2c42\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nh2m9" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.919275 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0387af3d-8796-46b0-9282-9ecbda7fe3a7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cp5md\" (UID: \"0387af3d-8796-46b0-9282-9ecbda7fe3a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cp5md" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.919367 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45cdw\" (UniqueName: \"kubernetes.io/projected/2eab9b06-06e4-4f58-ab86-ab1bf3b5cc96-kube-api-access-45cdw\") pod \"machine-config-controller-84d6567774-nf7jb\" (UID: \"2eab9b06-06e4-4f58-ab86-ab1bf3b5cc96\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nf7jb" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.919483 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/97cfbecd-36ef-409b-94e9-f607a1fa2c42-etcd-ca\") pod \"etcd-operator-b45778765-nh2m9\" (UID: \"97cfbecd-36ef-409b-94e9-f607a1fa2c42\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nh2m9" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.919566 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c779f8ba-7614-49f1-be6d-a9e316ec59ba-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kgl5s\" (UID: \"c779f8ba-7614-49f1-be6d-a9e316ec59ba\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.919636 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/99bab267-639b-48b1-abc4-8c0373200a39-etcd-serving-ca\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.919716 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk8rh\" (UniqueName: \"kubernetes.io/projected/3c2d0611-58f8-4a7e-8280-361c80d62802-kube-api-access-pk8rh\") pod \"controller-manager-879f6c89f-xlczd\" (UID: \"3c2d0611-58f8-4a7e-8280-361c80d62802\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.919785 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0e2a789d-6a90-4d60-881e-9562cd92e0a7-console-oauth-config\") pod \"console-f9d7485db-m2g9h\" (UID: \"0e2a789d-6a90-4d60-881e-9562cd92e0a7\") " pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.919848 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c779f8ba-7614-49f1-be6d-a9e316ec59ba-encryption-config\") pod \"apiserver-7bbb656c7d-kgl5s\" (UID: \"c779f8ba-7614-49f1-be6d-a9e316ec59ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.919914 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c779f8ba-7614-49f1-be6d-a9e316ec59ba-audit-dir\") pod \"apiserver-7bbb656c7d-kgl5s\" (UID: \"c779f8ba-7614-49f1-be6d-a9e316ec59ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.920004 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2eab9b06-06e4-4f58-ab86-ab1bf3b5cc96-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nf7jb\" (UID: \"2eab9b06-06e4-4f58-ab86-ab1bf3b5cc96\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nf7jb" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.920147 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwvc4\" (UniqueName: \"kubernetes.io/projected/554abf87-b1ba-45b1-8130-95b40da3b8bf-kube-api-access-zwvc4\") pod \"openshift-controller-manager-operator-756b6f6bc6-mrprd\" (UID: \"554abf87-b1ba-45b1-8130-95b40da3b8bf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mrprd" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.920221 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97cfbecd-36ef-409b-94e9-f607a1fa2c42-serving-cert\") pod \"etcd-operator-b45778765-nh2m9\" (UID: \"97cfbecd-36ef-409b-94e9-f607a1fa2c42\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nh2m9" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.920292 4714 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j8qsw\" (UniqueName: \"kubernetes.io/projected/6de35940-bef4-4dfa-9a83-08ba29d73399-kube-api-access-j8qsw\") pod \"cluster-image-registry-operator-dc59b4c8b-4dn69\" (UID: \"6de35940-bef4-4dfa-9a83-08ba29d73399\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4dn69" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.920390 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.920469 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d288ee23-1753-48f2-ab82-736defe5fe18-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9thpj\" (UID: \"d288ee23-1753-48f2-ab82-736defe5fe18\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9thpj" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.920545 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2eab9b06-06e4-4f58-ab86-ab1bf3b5cc96-proxy-tls\") pod \"machine-config-controller-84d6567774-nf7jb\" (UID: \"2eab9b06-06e4-4f58-ab86-ab1bf3b5cc96\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nf7jb" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.920609 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc8e2d06-1cc2-4ea7-8d87-340d28740e20-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-ch6wr\" (UID: \"fc8e2d06-1cc2-4ea7-8d87-340d28740e20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ch6wr" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.920676 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8e2d06-1cc2-4ea7-8d87-340d28740e20-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-ch6wr\" (UID: \"fc8e2d06-1cc2-4ea7-8d87-340d28740e20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ch6wr" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.920754 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbfdd647-1d64-4d35-9af2-6dee52b4c860-serving-cert\") pod \"route-controller-manager-6576b87f9c-m2qxw\" (UID: \"fbfdd647-1d64-4d35-9af2-6dee52b4c860\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.920837 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e2a789d-6a90-4d60-881e-9562cd92e0a7-trusted-ca-bundle\") pod \"console-f9d7485db-m2g9h\" (UID: \"0e2a789d-6a90-4d60-881e-9562cd92e0a7\") " pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 
16:12:13.920907 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fbcn\" (UniqueName: \"kubernetes.io/projected/42b66dc3-a385-4350-a943-50f062da35f7-kube-api-access-2fbcn\") pod \"downloads-7954f5f757-fn75b\" (UID: \"42b66dc3-a385-4350-a943-50f062da35f7\") " pod="openshift-console/downloads-7954f5f757-fn75b" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.920999 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99bab267-639b-48b1-abc4-8c0373200a39-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.921066 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.921144 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/97cfbecd-36ef-409b-94e9-f607a1fa2c42-etcd-client\") pod \"etcd-operator-b45778765-nh2m9\" (UID: \"97cfbecd-36ef-409b-94e9-f607a1fa2c42\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nh2m9" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.921214 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.921282 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c2d0611-58f8-4a7e-8280-361c80d62802-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xlczd\" (UID: \"3c2d0611-58f8-4a7e-8280-361c80d62802\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.921353 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0387af3d-8796-46b0-9282-9ecbda7fe3a7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cp5md\" (UID: \"0387af3d-8796-46b0-9282-9ecbda7fe3a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cp5md" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.921422 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn272\" (UniqueName: \"kubernetes.io/projected/0e2a789d-6a90-4d60-881e-9562cd92e0a7-kube-api-access-bn272\") pod \"console-f9d7485db-m2g9h\" (UID: \"0e2a789d-6a90-4d60-881e-9562cd92e0a7\") " pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.920996 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/c779f8ba-7614-49f1-be6d-a9e316ec59ba-audit-dir\") pod \"apiserver-7bbb656c7d-kgl5s\" (UID: \"c779f8ba-7614-49f1-be6d-a9e316ec59ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.921534 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eacb9f84-018a-4f64-b211-c9bedce50b9e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vcj84\" (UID: \"eacb9f84-018a-4f64-b211-c9bedce50b9e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcj84" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.921737 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c2d0611-58f8-4a7e-8280-361c80d62802-serving-cert\") pod \"controller-manager-879f6c89f-xlczd\" (UID: \"3c2d0611-58f8-4a7e-8280-361c80d62802\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.921862 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjjl8\" (UniqueName: \"kubernetes.io/projected/97cfbecd-36ef-409b-94e9-f607a1fa2c42-kube-api-access-rjjl8\") pod \"etcd-operator-b45778765-nh2m9\" (UID: \"97cfbecd-36ef-409b-94e9-f607a1fa2c42\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nh2m9" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.922476 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c779f8ba-7614-49f1-be6d-a9e316ec59ba-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kgl5s\" (UID: \"c779f8ba-7614-49f1-be6d-a9e316ec59ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.922708 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/99bab267-639b-48b1-abc4-8c0373200a39-audit\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.922747 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.922771 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.922793 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c14fb55e-a42b-46c9-9521-6e8b60235166-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2lpzx\" (UID: \"c14fb55e-a42b-46c9-9521-6e8b60235166\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2lpzx" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.922812 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f3e2f962-69e3-4008-a45f-5c35677f7f36-machine-approver-tls\") pod \"machine-approver-56656f9798-xvrxj\" (UID: \"f3e2f962-69e3-4008-a45f-5c35677f7f36\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xvrxj" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.922830 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92-config\") pod \"machine-api-operator-5694c8668f-z4h55\" (UID: \"bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z4h55" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.922845 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnhkm\" (UniqueName: \"kubernetes.io/projected/c779f8ba-7614-49f1-be6d-a9e316ec59ba-kube-api-access-rnhkm\") pod \"apiserver-7bbb656c7d-kgl5s\" (UID: \"c779f8ba-7614-49f1-be6d-a9e316ec59ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.922862 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b5da98c-0704-41c7-8563-707f7af93f41-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-99knh\" (UID: \"5b5da98c-0704-41c7-8563-707f7af93f41\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99knh" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.922897 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/97cfbecd-36ef-409b-94e9-f607a1fa2c42-etcd-ca\") pod \"etcd-operator-b45778765-nh2m9\" (UID: \"97cfbecd-36ef-409b-94e9-f607a1fa2c42\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nh2m9" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.922948 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e2a789d-6a90-4d60-881e-9562cd92e0a7-trusted-ca-bundle\") pod \"console-f9d7485db-m2g9h\" (UID: \"0e2a789d-6a90-4d60-881e-9562cd92e0a7\") " pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.923005 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvzb6\" (UniqueName: \"kubernetes.io/projected/d288ee23-1753-48f2-ab82-736defe5fe18-kube-api-access-tvzb6\") pod \"machine-config-operator-74547568cd-9thpj\" (UID: \"d288ee23-1753-48f2-ab82-736defe5fe18\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9thpj" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.923025 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-789j9\" (UniqueName: \"kubernetes.io/projected/fc8e2d06-1cc2-4ea7-8d87-340d28740e20-kube-api-access-789j9\") pod \"kube-storage-version-migrator-operator-b67b599dd-ch6wr\" (UID: \"fc8e2d06-1cc2-4ea7-8d87-340d28740e20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ch6wr" Jan 
29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.923047 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99bab267-639b-48b1-abc4-8c0373200a39-config\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.923065 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.923082 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92-images\") pod \"machine-api-operator-5694c8668f-z4h55\" (UID: \"bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z4h55" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.923100 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec-serving-cert\") pod \"console-operator-58897d9998-kvp9d\" (UID: \"ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec\") " pod="openshift-console-operator/console-operator-58897d9998-kvp9d" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.923116 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0387af3d-8796-46b0-9282-9ecbda7fe3a7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cp5md\" (UID: \"0387af3d-8796-46b0-9282-9ecbda7fe3a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cp5md" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.923133 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec-config\") pod \"console-operator-58897d9998-kvp9d\" (UID: \"ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec\") " pod="openshift-console-operator/console-operator-58897d9998-kvp9d" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.923162 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hv4x\" (UniqueName: \"kubernetes.io/projected/f3e2f962-69e3-4008-a45f-5c35677f7f36-kube-api-access-5hv4x\") pod \"machine-approver-56656f9798-xvrxj\" (UID: \"f3e2f962-69e3-4008-a45f-5c35677f7f36\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xvrxj" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.923179 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b25d77ec-57de-4c2a-b534-e98bf149b92a-serving-cert\") pod \"authentication-operator-69f744f599-jb6jw\" (UID: \"b25d77ec-57de-4c2a-b534-e98bf149b92a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb6jw" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.923199 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/99bab267-639b-48b1-abc4-8c0373200a39-audit-dir\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.923442 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/97cfbecd-36ef-409b-94e9-f607a1fa2c42-etcd-service-ca\") pod \"etcd-operator-b45778765-nh2m9\" (UID: \"97cfbecd-36ef-409b-94e9-f607a1fa2c42\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nh2m9" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.923592 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/99bab267-639b-48b1-abc4-8c0373200a39-etcd-serving-ca\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.924095 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.924288 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c2d0611-58f8-4a7e-8280-361c80d62802-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xlczd\" (UID: \"3c2d0611-58f8-4a7e-8280-361c80d62802\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.924483 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eacb9f84-018a-4f64-b211-c9bedce50b9e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vcj84\" (UID: \"eacb9f84-018a-4f64-b211-c9bedce50b9e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcj84" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.924599 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/99bab267-639b-48b1-abc4-8c0373200a39-audit\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.925696 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/99bab267-639b-48b1-abc4-8c0373200a39-audit-dir\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.925950 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2lpzx"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.926025 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97cfbecd-36ef-409b-94e9-f607a1fa2c42-config\") pod \"etcd-operator-b45778765-nh2m9\" (UID: 
\"97cfbecd-36ef-409b-94e9-f607a1fa2c42\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nh2m9" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927136 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec-config\") pod \"console-operator-58897d9998-kvp9d\" (UID: \"ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec\") " pod="openshift-console-operator/console-operator-58897d9998-kvp9d" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927148 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99bab267-639b-48b1-abc4-8c0373200a39-config\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927252 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6de35940-bef4-4dfa-9a83-08ba29d73399-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4dn69\" (UID: \"6de35940-bef4-4dfa-9a83-08ba29d73399\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4dn69" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927293 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2rbv\" (UniqueName: \"kubernetes.io/projected/99bab267-639b-48b1-abc4-8c0373200a39-kube-api-access-f2rbv\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927315 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c14fb55e-a42b-46c9-9521-6e8b60235166-config\") pod \"kube-controller-manager-operator-78b949d7b-2lpzx\" (UID: \"c14fb55e-a42b-46c9-9521-6e8b60235166\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2lpzx" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927338 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0e2a789d-6a90-4d60-881e-9562cd92e0a7-console-config\") pod \"console-f9d7485db-m2g9h\" (UID: \"0e2a789d-6a90-4d60-881e-9562cd92e0a7\") " pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927375 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3e2f962-69e3-4008-a45f-5c35677f7f36-auth-proxy-config\") pod \"machine-approver-56656f9798-xvrxj\" (UID: \"f3e2f962-69e3-4008-a45f-5c35677f7f36\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xvrxj" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927393 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b25d77ec-57de-4c2a-b534-e98bf149b92a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jb6jw\" (UID: \"b25d77ec-57de-4c2a-b534-e98bf149b92a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb6jw" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927408 4714 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-z4h55\" (UID: \"bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z4h55" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927423 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knzbq\" (UniqueName: \"kubernetes.io/projected/8f71ba3e-c687-4ff7-9475-1e18ded764f6-kube-api-access-knzbq\") pod \"openshift-config-operator-7777fb866f-dwsm5\" (UID: \"8f71ba3e-c687-4ff7-9475-1e18ded764f6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dwsm5" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927438 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c779f8ba-7614-49f1-be6d-a9e316ec59ba-serving-cert\") pod \"apiserver-7bbb656c7d-kgl5s\" (UID: \"c779f8ba-7614-49f1-be6d-a9e316ec59ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927452 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c2d0611-58f8-4a7e-8280-361c80d62802-config\") pod \"controller-manager-879f6c89f-xlczd\" (UID: \"3c2d0611-58f8-4a7e-8280-361c80d62802\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927467 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/832097a5-4691-42b6-99cc-38679071d5ee-audit-dir\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927483 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927508 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srhd7\" (UniqueName: \"kubernetes.io/projected/fbfdd647-1d64-4d35-9af2-6dee52b4c860-kube-api-access-srhd7\") pod \"route-controller-manager-6576b87f9c-m2qxw\" (UID: \"fbfdd647-1d64-4d35-9af2-6dee52b4c860\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927125 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927522 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99cl5\" (UniqueName: \"kubernetes.io/projected/b25d77ec-57de-4c2a-b534-e98bf149b92a-kube-api-access-99cl5\") pod \"authentication-operator-69f744f599-jb6jw\" (UID: \"b25d77ec-57de-4c2a-b534-e98bf149b92a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb6jw" Jan 29 16:12:13 
crc kubenswrapper[4714]: I0129 16:12:13.927539 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c779f8ba-7614-49f1-be6d-a9e316ec59ba-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kgl5s\" (UID: \"c779f8ba-7614-49f1-be6d-a9e316ec59ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927555 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/99bab267-639b-48b1-abc4-8c0373200a39-etcd-client\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927570 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/832097a5-4691-42b6-99cc-38679071d5ee-audit-policies\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927575 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92-images\") pod \"machine-api-operator-5694c8668f-z4h55\" (UID: \"bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z4h55" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927587 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927615 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3e2f962-69e3-4008-a45f-5c35677f7f36-config\") pod \"machine-approver-56656f9798-xvrxj\" (UID: \"f3e2f962-69e3-4008-a45f-5c35677f7f36\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xvrxj" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927629 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b25d77ec-57de-4c2a-b534-e98bf149b92a-service-ca-bundle\") pod \"authentication-operator-69f744f599-jb6jw\" (UID: \"b25d77ec-57de-4c2a-b534-e98bf149b92a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb6jw" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927645 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb7zc\" (UniqueName: \"kubernetes.io/projected/eacb9f84-018a-4f64-b211-c9bedce50b9e-kube-api-access-sb7zc\") pod \"openshift-apiserver-operator-796bbdcf4f-vcj84\" (UID: \"eacb9f84-018a-4f64-b211-c9bedce50b9e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcj84" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927660 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/99bab267-639b-48b1-abc4-8c0373200a39-encryption-config\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927676 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927692 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c14fb55e-a42b-46c9-9521-6e8b60235166-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2lpzx\" (UID: \"c14fb55e-a42b-46c9-9521-6e8b60235166\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2lpzx" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927742 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/832097a5-4691-42b6-99cc-38679071d5ee-audit-dir\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927755 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbfdd647-1d64-4d35-9af2-6dee52b4c860-config\") pod \"route-controller-manager-6576b87f9c-m2qxw\" (UID: \"fbfdd647-1d64-4d35-9af2-6dee52b4c860\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927774 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b25d77ec-57de-4c2a-b534-e98bf149b92a-config\") pod \"authentication-operator-69f744f599-jb6jw\" (UID: \"b25d77ec-57de-4c2a-b534-e98bf149b92a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb6jw" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927797 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c2d0611-58f8-4a7e-8280-361c80d62802-client-ca\") pod \"controller-manager-879f6c89f-xlczd\" (UID: \"3c2d0611-58f8-4a7e-8280-361c80d62802\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927815 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6de35940-bef4-4dfa-9a83-08ba29d73399-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4dn69\" (UID: \"6de35940-bef4-4dfa-9a83-08ba29d73399\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4dn69" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927831 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/99bab267-639b-48b1-abc4-8c0373200a39-image-import-ca\") pod \"apiserver-76f77b778f-6jl75\" (UID: 
\"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.927847 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8nmp\" (UniqueName: \"kubernetes.io/projected/832097a5-4691-42b6-99cc-38679071d5ee-kube-api-access-l8nmp\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.928654 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0e2a789d-6a90-4d60-881e-9562cd92e0a7-console-config\") pod \"console-f9d7485db-m2g9h\" (UID: \"0e2a789d-6a90-4d60-881e-9562cd92e0a7\") " pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.928682 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/99bab267-639b-48b1-abc4-8c0373200a39-node-pullsecrets\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.928703 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.928722 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fbfdd647-1d64-4d35-9af2-6dee52b4c860-client-ca\") pod \"route-controller-manager-6576b87f9c-m2qxw\" (UID: \"fbfdd647-1d64-4d35-9af2-6dee52b4c860\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.928736 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0e2a789d-6a90-4d60-881e-9562cd92e0a7-service-ca\") pod \"console-f9d7485db-m2g9h\" (UID: \"0e2a789d-6a90-4d60-881e-9562cd92e0a7\") " pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.928752 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eacb9f84-018a-4f64-b211-c9bedce50b9e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vcj84\" (UID: \"eacb9f84-018a-4f64-b211-c9bedce50b9e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcj84" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.928766 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99bab267-639b-48b1-abc4-8c0373200a39-serving-cert\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.928783 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0e2a789d-6a90-4d60-881e-9562cd92e0a7-console-serving-cert\") pod \"console-f9d7485db-m2g9h\" (UID: \"0e2a789d-6a90-4d60-881e-9562cd92e0a7\") " pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.929115 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97cfbecd-36ef-409b-94e9-f607a1fa2c42-config\") pod \"etcd-operator-b45778765-nh2m9\" (UID: \"97cfbecd-36ef-409b-94e9-f607a1fa2c42\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nh2m9" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.929180 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3e2f962-69e3-4008-a45f-5c35677f7f36-auth-proxy-config\") pod \"machine-approver-56656f9798-xvrxj\" (UID: \"f3e2f962-69e3-4008-a45f-5c35677f7f36\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xvrxj" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.930020 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c779f8ba-7614-49f1-be6d-a9e316ec59ba-serving-cert\") pod \"apiserver-7bbb656c7d-kgl5s\" (UID: \"c779f8ba-7614-49f1-be6d-a9e316ec59ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.930093 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b25d77ec-57de-4c2a-b534-e98bf149b92a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jb6jw\" (UID: \"b25d77ec-57de-4c2a-b534-e98bf149b92a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb6jw" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.930130 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0387af3d-8796-46b0-9282-9ecbda7fe3a7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cp5md\" (UID: \"0387af3d-8796-46b0-9282-9ecbda7fe3a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cp5md" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.930296 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.930378 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99bab267-639b-48b1-abc4-8c0373200a39-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.930741 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3e2f962-69e3-4008-a45f-5c35677f7f36-config\") pod \"machine-approver-56656f9798-xvrxj\" (UID: \"f3e2f962-69e3-4008-a45f-5c35677f7f36\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xvrxj" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.930928 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/97cfbecd-36ef-409b-94e9-f607a1fa2c42-etcd-client\") pod \"etcd-operator-b45778765-nh2m9\" (UID: 
\"97cfbecd-36ef-409b-94e9-f607a1fa2c42\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nh2m9" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.930960 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b5da98c-0704-41c7-8563-707f7af93f41-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-99knh\" (UID: \"5b5da98c-0704-41c7-8563-707f7af93f41\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99knh" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.931152 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/832097a5-4691-42b6-99cc-38679071d5ee-audit-policies\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.931671 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.931918 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c2d0611-58f8-4a7e-8280-361c80d62802-serving-cert\") pod \"controller-manager-879f6c89f-xlczd\" (UID: \"3c2d0611-58f8-4a7e-8280-361c80d62802\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.932443 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.932457 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/99bab267-639b-48b1-abc4-8c0373200a39-node-pullsecrets\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.932620 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b25d77ec-57de-4c2a-b534-e98bf149b92a-service-ca-bundle\") pod \"authentication-operator-69f744f599-jb6jw\" (UID: \"b25d77ec-57de-4c2a-b534-e98bf149b92a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb6jw" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.933065 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b25d77ec-57de-4c2a-b534-e98bf149b92a-config\") pod \"authentication-operator-69f744f599-jb6jw\" (UID: \"b25d77ec-57de-4c2a-b534-e98bf149b92a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb6jw" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.933388 4714 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/99bab267-639b-48b1-abc4-8c0373200a39-image-import-ca\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.933429 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0e2a789d-6a90-4d60-881e-9562cd92e0a7-service-ca\") pod \"console-f9d7485db-m2g9h\" (UID: \"0e2a789d-6a90-4d60-881e-9562cd92e0a7\") " pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.934073 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/99bab267-639b-48b1-abc4-8c0373200a39-etcd-client\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.935523 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c2d0611-58f8-4a7e-8280-361c80d62802-client-ca\") pod \"controller-manager-879f6c89f-xlczd\" (UID: \"3c2d0611-58f8-4a7e-8280-361c80d62802\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.935577 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c2d0611-58f8-4a7e-8280-361c80d62802-config\") pod \"controller-manager-879f6c89f-xlczd\" (UID: \"3c2d0611-58f8-4a7e-8280-361c80d62802\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.935910 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c779f8ba-7614-49f1-be6d-a9e316ec59ba-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kgl5s\" (UID: \"c779f8ba-7614-49f1-be6d-a9e316ec59ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.936071 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fbfdd647-1d64-4d35-9af2-6dee52b4c860-client-ca\") pod \"route-controller-manager-6576b87f9c-m2qxw\" (UID: \"fbfdd647-1d64-4d35-9af2-6dee52b4c860\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.936431 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.936982 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99bab267-639b-48b1-abc4-8c0373200a39-serving-cert\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.937047 
4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d288ee23-1753-48f2-ab82-736defe5fe18-images\") pod \"machine-config-operator-74547568cd-9thpj\" (UID: \"d288ee23-1753-48f2-ab82-736defe5fe18\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9thpj" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.937073 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/554abf87-b1ba-45b1-8130-95b40da3b8bf-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mrprd\" (UID: \"554abf87-b1ba-45b1-8130-95b40da3b8bf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mrprd" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.937098 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec-trusted-ca\") pod \"console-operator-58897d9998-kvp9d\" (UID: \"ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec\") " pod="openshift-console-operator/console-operator-58897d9998-kvp9d" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.937132 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8tm6\" (UniqueName: \"kubernetes.io/projected/bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92-kube-api-access-k8tm6\") pod \"machine-api-operator-5694c8668f-z4h55\" (UID: \"bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z4h55" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.937154 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6de35940-bef4-4dfa-9a83-08ba29d73399-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4dn69\" (UID: \"6de35940-bef4-4dfa-9a83-08ba29d73399\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4dn69" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.937207 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6de35940-bef4-4dfa-9a83-08ba29d73399-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4dn69\" (UID: \"6de35940-bef4-4dfa-9a83-08ba29d73399\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4dn69" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.937227 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8f71ba3e-c687-4ff7-9475-1e18ded764f6-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dwsm5\" (UID: \"8f71ba3e-c687-4ff7-9475-1e18ded764f6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dwsm5" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.937264 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmd5b\" (UniqueName: \"kubernetes.io/projected/5b5da98c-0704-41c7-8563-707f7af93f41-kube-api-access-hmd5b\") pod \"cluster-samples-operator-665b6dd947-99knh\" (UID: \"5b5da98c-0704-41c7-8563-707f7af93f41\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99knh" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.937284 4714 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d288ee23-1753-48f2-ab82-736defe5fe18-proxy-tls\") pod \"machine-config-operator-74547568cd-9thpj\" (UID: \"d288ee23-1753-48f2-ab82-736defe5fe18\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9thpj" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.937305 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c779f8ba-7614-49f1-be6d-a9e316ec59ba-audit-policies\") pod \"apiserver-7bbb656c7d-kgl5s\" (UID: \"c779f8ba-7614-49f1-be6d-a9e316ec59ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.937345 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.937367 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0e2a789d-6a90-4d60-881e-9562cd92e0a7-oauth-serving-cert\") pod \"console-f9d7485db-m2g9h\" (UID: \"0e2a789d-6a90-4d60-881e-9562cd92e0a7\") " pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.937385 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f71ba3e-c687-4ff7-9475-1e18ded764f6-serving-cert\") pod \"openshift-config-operator-7777fb866f-dwsm5\" (UID: \"8f71ba3e-c687-4ff7-9475-1e18ded764f6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dwsm5" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.937418 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c779f8ba-7614-49f1-be6d-a9e316ec59ba-etcd-client\") pod \"apiserver-7bbb656c7d-kgl5s\" (UID: \"c779f8ba-7614-49f1-be6d-a9e316ec59ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.937437 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/554abf87-b1ba-45b1-8130-95b40da3b8bf-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mrprd\" (UID: \"554abf87-b1ba-45b1-8130-95b40da3b8bf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mrprd" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.938010 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c779f8ba-7614-49f1-be6d-a9e316ec59ba-audit-policies\") pod \"apiserver-7bbb656c7d-kgl5s\" (UID: \"c779f8ba-7614-49f1-be6d-a9e316ec59ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.938008 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.938016 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec-trusted-ca\") pod \"console-operator-58897d9998-kvp9d\" (UID: \"ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec\") " pod="openshift-console-operator/console-operator-58897d9998-kvp9d" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.938530 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0e2a789d-6a90-4d60-881e-9562cd92e0a7-oauth-serving-cert\") pod \"console-f9d7485db-m2g9h\" (UID: \"0e2a789d-6a90-4d60-881e-9562cd92e0a7\") " pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.938569 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-z4h55\" (UID: \"bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z4h55" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.938645 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6de35940-bef4-4dfa-9a83-08ba29d73399-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4dn69\" (UID: \"6de35940-bef4-4dfa-9a83-08ba29d73399\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4dn69" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.939229 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.939369 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8f71ba3e-c687-4ff7-9475-1e18ded764f6-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dwsm5\" (UID: \"8f71ba3e-c687-4ff7-9475-1e18ded764f6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dwsm5" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.939567 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.939733 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e2a789d-6a90-4d60-881e-9562cd92e0a7-console-serving-cert\") pod \"console-f9d7485db-m2g9h\" (UID: 
\"0e2a789d-6a90-4d60-881e-9562cd92e0a7\") " pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.940161 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.940173 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eacb9f84-018a-4f64-b211-c9bedce50b9e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vcj84\" (UID: \"eacb9f84-018a-4f64-b211-c9bedce50b9e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcj84" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.940447 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c779f8ba-7614-49f1-be6d-a9e316ec59ba-etcd-client\") pod \"apiserver-7bbb656c7d-kgl5s\" (UID: \"c779f8ba-7614-49f1-be6d-a9e316ec59ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.940473 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92-config\") pod \"machine-api-operator-5694c8668f-z4h55\" (UID: \"bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z4h55" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.944801 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.945746 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f3e2f962-69e3-4008-a45f-5c35677f7f36-machine-approver-tls\") pod \"machine-approver-56656f9798-xvrxj\" (UID: \"f3e2f962-69e3-4008-a45f-5c35677f7f36\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xvrxj" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.945830 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nf7jb"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.945846 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f71ba3e-c687-4ff7-9475-1e18ded764f6-serving-cert\") pod \"openshift-config-operator-7777fb866f-dwsm5\" (UID: \"8f71ba3e-c687-4ff7-9475-1e18ded764f6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dwsm5" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.946411 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcj84"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.947489 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9thpj"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.948558 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.950915 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/99bab267-639b-48b1-abc4-8c0373200a39-encryption-config\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.951709 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbfdd647-1d64-4d35-9af2-6dee52b4c860-config\") pod \"route-controller-manager-6576b87f9c-m2qxw\" (UID: \"fbfdd647-1d64-4d35-9af2-6dee52b4c860\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.952601 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.954336 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.954610 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6jl75"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.956556 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gnjmm"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.958114 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-kvp9d"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.958126 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbfdd647-1d64-4d35-9af2-6dee52b4c860-serving-cert\") pod \"route-controller-manager-6576b87f9c-m2qxw\" (UID: \"fbfdd647-1d64-4d35-9af2-6dee52b4c860\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.959413 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dwsm5"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.960254 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97cfbecd-36ef-409b-94e9-f607a1fa2c42-serving-cert\") pod \"etcd-operator-b45778765-nh2m9\" (UID: \"97cfbecd-36ef-409b-94e9-f607a1fa2c42\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nh2m9" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.960376 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c779f8ba-7614-49f1-be6d-a9e316ec59ba-encryption-config\") pod \"apiserver-7bbb656c7d-kgl5s\" (UID: 
\"c779f8ba-7614-49f1-be6d-a9e316ec59ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.960389 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.960396 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0387af3d-8796-46b0-9282-9ecbda7fe3a7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cp5md\" (UID: \"0387af3d-8796-46b0-9282-9ecbda7fe3a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cp5md" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.960461 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0e2a789d-6a90-4d60-881e-9562cd92e0a7-console-oauth-config\") pod \"console-f9d7485db-m2g9h\" (UID: \"0e2a789d-6a90-4d60-881e-9562cd92e0a7\") " pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.960873 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b25d77ec-57de-4c2a-b534-e98bf149b92a-serving-cert\") pod \"authentication-operator-69f744f599-jb6jw\" (UID: \"b25d77ec-57de-4c2a-b534-e98bf149b92a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb6jw" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.962157 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-sv7xw"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.963686 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec-serving-cert\") pod \"console-operator-58897d9998-kvp9d\" (UID: \"ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec\") " pod="openshift-console-operator/console-operator-58897d9998-kvp9d" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.964044 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-r6cxt"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.965968 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nh2m9"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.967524 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cp5md"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.968402 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.969094 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495040-5mkf8"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.970094 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l2t56"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 
16:12:13.971057 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-vhtdt"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.971804 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vhtdt" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.972021 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4nghl"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.973022 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-chzp2"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.973033 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4nghl" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.974021 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-44gfk"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.974952 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mrprd"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.975891 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zkbcz"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.976886 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-z4h55"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.978095 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99knh"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.979390 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xtzbx"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.981191 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h8b4r"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.982227 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzg2m"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.983318 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jcdhl"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.984713 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4nghl"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.985626 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kfqcf"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.986555 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v68nn"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.987612 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbrmk"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.988345 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 
16:12:13.989007 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ljnh7"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.990011 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vhtdt"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.991108 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sq9mx"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.998578 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-zxb4v"] Jan 29 16:12:13 crc kubenswrapper[4714]: I0129 16:12:13.999435 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-zxb4v" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.008143 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.028284 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.039020 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c14fb55e-a42b-46c9-9521-6e8b60235166-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2lpzx\" (UID: \"c14fb55e-a42b-46c9-9521-6e8b60235166\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2lpzx" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.039066 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-789j9\" (UniqueName: \"kubernetes.io/projected/fc8e2d06-1cc2-4ea7-8d87-340d28740e20-kube-api-access-789j9\") pod \"kube-storage-version-migrator-operator-b67b599dd-ch6wr\" (UID: \"fc8e2d06-1cc2-4ea7-8d87-340d28740e20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ch6wr" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.039105 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvzb6\" (UniqueName: \"kubernetes.io/projected/d288ee23-1753-48f2-ab82-736defe5fe18-kube-api-access-tvzb6\") pod \"machine-config-operator-74547568cd-9thpj\" (UID: \"d288ee23-1753-48f2-ab82-736defe5fe18\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9thpj" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.039158 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c14fb55e-a42b-46c9-9521-6e8b60235166-config\") pod \"kube-controller-manager-operator-78b949d7b-2lpzx\" (UID: \"c14fb55e-a42b-46c9-9521-6e8b60235166\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2lpzx" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.039269 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c14fb55e-a42b-46c9-9521-6e8b60235166-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2lpzx\" (UID: \"c14fb55e-a42b-46c9-9521-6e8b60235166\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2lpzx" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.039317 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/554abf87-b1ba-45b1-8130-95b40da3b8bf-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mrprd\" (UID: \"554abf87-b1ba-45b1-8130-95b40da3b8bf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mrprd" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.039343 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d288ee23-1753-48f2-ab82-736defe5fe18-images\") pod \"machine-config-operator-74547568cd-9thpj\" (UID: \"d288ee23-1753-48f2-ab82-736defe5fe18\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9thpj" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.039376 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d288ee23-1753-48f2-ab82-736defe5fe18-proxy-tls\") pod \"machine-config-operator-74547568cd-9thpj\" (UID: \"d288ee23-1753-48f2-ab82-736defe5fe18\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9thpj" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.039421 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/554abf87-b1ba-45b1-8130-95b40da3b8bf-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mrprd\" (UID: \"554abf87-b1ba-45b1-8130-95b40da3b8bf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mrprd" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.039454 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45cdw\" (UniqueName: \"kubernetes.io/projected/2eab9b06-06e4-4f58-ab86-ab1bf3b5cc96-kube-api-access-45cdw\") pod \"machine-config-controller-84d6567774-nf7jb\" (UID: \"2eab9b06-06e4-4f58-ab86-ab1bf3b5cc96\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nf7jb" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.039491 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2eab9b06-06e4-4f58-ab86-ab1bf3b5cc96-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nf7jb\" (UID: \"2eab9b06-06e4-4f58-ab86-ab1bf3b5cc96\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nf7jb" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.039508 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwvc4\" (UniqueName: \"kubernetes.io/projected/554abf87-b1ba-45b1-8130-95b40da3b8bf-kube-api-access-zwvc4\") pod \"openshift-controller-manager-operator-756b6f6bc6-mrprd\" (UID: \"554abf87-b1ba-45b1-8130-95b40da3b8bf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mrprd" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.039524 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2eab9b06-06e4-4f58-ab86-ab1bf3b5cc96-proxy-tls\") pod \"machine-config-controller-84d6567774-nf7jb\" (UID: 
\"2eab9b06-06e4-4f58-ab86-ab1bf3b5cc96\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nf7jb" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.039539 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc8e2d06-1cc2-4ea7-8d87-340d28740e20-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-ch6wr\" (UID: \"fc8e2d06-1cc2-4ea7-8d87-340d28740e20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ch6wr" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.039563 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d288ee23-1753-48f2-ab82-736defe5fe18-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9thpj\" (UID: \"d288ee23-1753-48f2-ab82-736defe5fe18\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9thpj" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.039580 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8e2d06-1cc2-4ea7-8d87-340d28740e20-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-ch6wr\" (UID: \"fc8e2d06-1cc2-4ea7-8d87-340d28740e20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ch6wr" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.040794 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2eab9b06-06e4-4f58-ab86-ab1bf3b5cc96-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nf7jb\" (UID: \"2eab9b06-06e4-4f58-ab86-ab1bf3b5cc96\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nf7jb" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.040956 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d288ee23-1753-48f2-ab82-736defe5fe18-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9thpj\" (UID: \"d288ee23-1753-48f2-ab82-736defe5fe18\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9thpj" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.042896 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2eab9b06-06e4-4f58-ab86-ab1bf3b5cc96-proxy-tls\") pod \"machine-config-controller-84d6567774-nf7jb\" (UID: \"2eab9b06-06e4-4f58-ab86-ab1bf3b5cc96\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nf7jb" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.048158 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.068974 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.088402 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.109142 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" 
Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.128439 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.149248 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.168558 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.189245 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.208577 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.228795 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.249273 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.253795 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d288ee23-1753-48f2-ab82-736defe5fe18-proxy-tls\") pod \"machine-config-operator-74547568cd-9thpj\" (UID: \"d288ee23-1753-48f2-ab82-736defe5fe18\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9thpj" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.268769 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.271360 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d288ee23-1753-48f2-ab82-736defe5fe18-images\") pod \"machine-config-operator-74547568cd-9thpj\" (UID: \"d288ee23-1753-48f2-ab82-736defe5fe18\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9thpj" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.288045 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.308324 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.328848 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.334252 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/554abf87-b1ba-45b1-8130-95b40da3b8bf-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mrprd\" (UID: \"554abf87-b1ba-45b1-8130-95b40da3b8bf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mrprd" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.349455 4714 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.350755 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/554abf87-b1ba-45b1-8130-95b40da3b8bf-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mrprd\" (UID: \"554abf87-b1ba-45b1-8130-95b40da3b8bf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mrprd" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.369563 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.388392 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.407816 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.428556 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.448551 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.469576 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.487796 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.516882 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.530415 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.549121 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.568979 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.589139 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.593415 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c14fb55e-a42b-46c9-9521-6e8b60235166-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2lpzx\" (UID: \"c14fb55e-a42b-46c9-9521-6e8b60235166\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2lpzx" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.609886 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.610644 
4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c14fb55e-a42b-46c9-9521-6e8b60235166-config\") pod \"kube-controller-manager-operator-78b949d7b-2lpzx\" (UID: \"c14fb55e-a42b-46c9-9521-6e8b60235166\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2lpzx" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.648711 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.668873 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.688897 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.715195 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.729708 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.748819 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.754664 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc8e2d06-1cc2-4ea7-8d87-340d28740e20-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-ch6wr\" (UID: \"fc8e2d06-1cc2-4ea7-8d87-340d28740e20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ch6wr" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.769985 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.788103 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.809663 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.828608 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.848197 4714 request.go:700] Waited for 1.000785101s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0 Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.850571 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.862200 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fc8e2d06-1cc2-4ea7-8d87-340d28740e20-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-ch6wr\" (UID: \"fc8e2d06-1cc2-4ea7-8d87-340d28740e20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ch6wr" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.868328 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.888971 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.908542 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.929553 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.949492 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.968741 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 16:12:14 crc kubenswrapper[4714]: I0129 16:12:14.988540 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.028352 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.048843 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.068592 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.088626 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.109486 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.128841 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.148733 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.169332 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.188844 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.209563 4714 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.229015 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.248853 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.268798 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.288732 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.308475 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.328849 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.348572 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.368688 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.387777 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.408985 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.428830 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.450155 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.469033 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.489230 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.509157 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.529109 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.571142 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x5xk\" (UniqueName: \"kubernetes.io/projected/ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec-kube-api-access-4x5xk\") pod \"console-operator-58897d9998-kvp9d\" (UID: \"ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec\") " pod="openshift-console-operator/console-operator-58897d9998-kvp9d" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.597622 4714 
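
The kube-api-access-* volumes mounted in this stretch are projected service account tokens: for each one the kubelet issues a TokenRequest against the pod's service account, which is exactly the throttled POST .../serviceaccounts/machine-api-operator/token visible a few entries below (delayed about 1.9s by client-go's client-side rate limiter, not by API priority and fairness, as the log itself notes). A sketch of the same request through client-go; the kubeconfig path, audience, and TTL are illustrative, and the QPS/Burst lines show the knobs behind the "client-side throttling" waits:

```go
package main

import (
	"context"
	"fmt"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	// The client-side token bucket that produced "Waited for ... due to
	// client-side throttling"; raising these trades API load for latency.
	config.QPS = 50
	config.Burst = 100
	clientset := kubernetes.NewForConfigOrDie(config)

	ttl := int64(3600) // illustrative; the kubelet requests short-lived tokens and rotates them
	tr, err := clientset.CoreV1().ServiceAccounts("openshift-machine-api").CreateToken(
		context.Background(), "machine-api-operator",
		&authenticationv1.TokenRequest{
			Spec: authenticationv1.TokenRequestSpec{
				Audiences:         []string{"https://kubernetes.default.svc"}, // illustrative
				ExpirationSeconds: &ttl,
			},
		}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("token expires:", tr.Status.ExpirationTimestamp)
}
```
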
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fbcn\" (UniqueName: \"kubernetes.io/projected/42b66dc3-a385-4350-a943-50f062da35f7-kube-api-access-2fbcn\") pod \"downloads-7954f5f757-fn75b\" (UID: \"42b66dc3-a385-4350-a943-50f062da35f7\") " pod="openshift-console/downloads-7954f5f757-fn75b" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.607224 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-kvp9d" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.619689 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnhkm\" (UniqueName: \"kubernetes.io/projected/c779f8ba-7614-49f1-be6d-a9e316ec59ba-kube-api-access-rnhkm\") pod \"apiserver-7bbb656c7d-kgl5s\" (UID: \"c779f8ba-7614-49f1-be6d-a9e316ec59ba\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.635726 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjjl8\" (UniqueName: \"kubernetes.io/projected/97cfbecd-36ef-409b-94e9-f607a1fa2c42-kube-api-access-rjjl8\") pod \"etcd-operator-b45778765-nh2m9\" (UID: \"97cfbecd-36ef-409b-94e9-f607a1fa2c42\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nh2m9" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.647584 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0387af3d-8796-46b0-9282-9ecbda7fe3a7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cp5md\" (UID: \"0387af3d-8796-46b0-9282-9ecbda7fe3a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cp5md" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.666771 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn272\" (UniqueName: \"kubernetes.io/projected/0e2a789d-6a90-4d60-881e-9562cd92e0a7-kube-api-access-bn272\") pod \"console-f9d7485db-m2g9h\" (UID: \"0e2a789d-6a90-4d60-881e-9562cd92e0a7\") " pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.680699 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.686467 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hv4x\" (UniqueName: \"kubernetes.io/projected/f3e2f962-69e3-4008-a45f-5c35677f7f36-kube-api-access-5hv4x\") pod \"machine-approver-56656f9798-xvrxj\" (UID: \"f3e2f962-69e3-4008-a45f-5c35677f7f36\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xvrxj" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.707746 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8qsw\" (UniqueName: \"kubernetes.io/projected/6de35940-bef4-4dfa-9a83-08ba29d73399-kube-api-access-j8qsw\") pod \"cluster-image-registry-operator-dc59b4c8b-4dn69\" (UID: \"6de35940-bef4-4dfa-9a83-08ba29d73399\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4dn69" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.716399 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.724879 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk8rh\" (UniqueName: \"kubernetes.io/projected/3c2d0611-58f8-4a7e-8280-361c80d62802-kube-api-access-pk8rh\") pod \"controller-manager-879f6c89f-xlczd\" (UID: \"3c2d0611-58f8-4a7e-8280-361c80d62802\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.750223 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knzbq\" (UniqueName: \"kubernetes.io/projected/8f71ba3e-c687-4ff7-9475-1e18ded764f6-kube-api-access-knzbq\") pod \"openshift-config-operator-7777fb866f-dwsm5\" (UID: \"8f71ba3e-c687-4ff7-9475-1e18ded764f6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dwsm5" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.765247 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srhd7\" (UniqueName: \"kubernetes.io/projected/fbfdd647-1d64-4d35-9af2-6dee52b4c860-kube-api-access-srhd7\") pod \"route-controller-manager-6576b87f9c-m2qxw\" (UID: \"fbfdd647-1d64-4d35-9af2-6dee52b4c860\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.777292 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dwsm5" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.786608 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2rbv\" (UniqueName: \"kubernetes.io/projected/99bab267-639b-48b1-abc4-8c0373200a39-kube-api-access-f2rbv\") pod \"apiserver-76f77b778f-6jl75\" (UID: \"99bab267-639b-48b1-abc4-8c0373200a39\") " pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.794037 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fn75b" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.809091 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nh2m9" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.811157 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99cl5\" (UniqueName: \"kubernetes.io/projected/b25d77ec-57de-4c2a-b534-e98bf149b92a-kube-api-access-99cl5\") pod \"authentication-operator-69f744f599-jb6jw\" (UID: \"b25d77ec-57de-4c2a-b534-e98bf149b92a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jb6jw" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.814710 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cp5md" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.828896 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb7zc\" (UniqueName: \"kubernetes.io/projected/eacb9f84-018a-4f64-b211-c9bedce50b9e-kube-api-access-sb7zc\") pod \"openshift-apiserver-operator-796bbdcf4f-vcj84\" (UID: \"eacb9f84-018a-4f64-b211-c9bedce50b9e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcj84" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.835743 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xvrxj" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.844741 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jb6jw" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.846297 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8nmp\" (UniqueName: \"kubernetes.io/projected/832097a5-4691-42b6-99cc-38679071d5ee-kube-api-access-l8nmp\") pod \"oauth-openshift-558db77b4-h8b4r\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.847551 4714 request.go:700] Waited for 1.910213618s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.866969 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8tm6\" (UniqueName: \"kubernetes.io/projected/bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92-kube-api-access-k8tm6\") pod \"machine-api-operator-5694c8668f-z4h55\" (UID: \"bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z4h55" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.871054 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.898443 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmd5b\" (UniqueName: \"kubernetes.io/projected/5b5da98c-0704-41c7-8563-707f7af93f41-kube-api-access-hmd5b\") pod \"cluster-samples-operator-665b6dd947-99knh\" (UID: \"5b5da98c-0704-41c7-8563-707f7af93f41\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99knh" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.906205 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6de35940-bef4-4dfa-9a83-08ba29d73399-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4dn69\" (UID: \"6de35940-bef4-4dfa-9a83-08ba29d73399\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4dn69" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.909748 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.916429 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-kvp9d"] Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.916691 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.930102 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.934044 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s"] Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.947589 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xvrxj" event={"ID":"f3e2f962-69e3-4008-a45f-5c35677f7f36","Type":"ContainerStarted","Data":"e6417ed16b50250c1230123b28ea7f944dd6f6be42cb84b33ddc22f04045662b"} Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.948544 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.953137 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcj84" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.968817 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.987826 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-m2g9h"] Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.990259 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:15 crc kubenswrapper[4714]: I0129 16:12:15.991501 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.005830 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-z4h55" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.013621 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.023300 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4dn69" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.029017 4714 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.051836 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.070122 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.085042 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99knh" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.089718 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.100697 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.128003 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-789j9\" (UniqueName: \"kubernetes.io/projected/fc8e2d06-1cc2-4ea7-8d87-340d28740e20-kube-api-access-789j9\") pod \"kube-storage-version-migrator-operator-b67b599dd-ch6wr\" (UID: \"fc8e2d06-1cc2-4ea7-8d87-340d28740e20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ch6wr" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.154647 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvzb6\" (UniqueName: \"kubernetes.io/projected/d288ee23-1753-48f2-ab82-736defe5fe18-kube-api-access-tvzb6\") pod \"machine-config-operator-74547568cd-9thpj\" (UID: \"d288ee23-1753-48f2-ab82-736defe5fe18\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9thpj" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.167028 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c14fb55e-a42b-46c9-9521-6e8b60235166-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2lpzx\" (UID: \"c14fb55e-a42b-46c9-9521-6e8b60235166\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2lpzx" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.181752 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2lpzx" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.188822 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwvc4\" (UniqueName: \"kubernetes.io/projected/554abf87-b1ba-45b1-8130-95b40da3b8bf-kube-api-access-zwvc4\") pod \"openshift-controller-manager-operator-756b6f6bc6-mrprd\" (UID: \"554abf87-b1ba-45b1-8130-95b40da3b8bf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mrprd" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.195828 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jb6jw"] Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.198035 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ch6wr" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.211242 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45cdw\" (UniqueName: \"kubernetes.io/projected/2eab9b06-06e4-4f58-ab86-ab1bf3b5cc96-kube-api-access-45cdw\") pod \"machine-config-controller-84d6567774-nf7jb\" (UID: \"2eab9b06-06e4-4f58-ab86-ab1bf3b5cc96\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nf7jb" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.216977 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9thpj" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.217032 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xlczd"] Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.252095 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcj84"] Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.262728 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mrprd" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.272259 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfkhr\" (UniqueName: \"kubernetes.io/projected/1d9869e2-6f55-4246-8ed0-b8af9dab3f74-kube-api-access-gfkhr\") pod \"ingress-operator-5b745b69d9-r6cxt\" (UID: \"1d9869e2-6f55-4246-8ed0-b8af9dab3f74\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r6cxt" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.272311 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80515d06-c09e-4c9d-a90f-43cc84edf4c9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-l2t56\" (UID: \"80515d06-c09e-4c9d-a90f-43cc84edf4c9\") " pod="openshift-marketplace/marketplace-operator-79b997595-l2t56" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.272399 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfc66\" (UniqueName: \"kubernetes.io/projected/80515d06-c09e-4c9d-a90f-43cc84edf4c9-kube-api-access-xfc66\") pod \"marketplace-operator-79b997595-l2t56\" (UID: \"80515d06-c09e-4c9d-a90f-43cc84edf4c9\") " pod="openshift-marketplace/marketplace-operator-79b997595-l2t56" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.272526 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfz7c\" (UniqueName: \"kubernetes.io/projected/cc84f60e-094e-4924-b6f1-f0a8ab81aa4e-kube-api-access-zfz7c\") pod \"catalog-operator-68c6474976-jzg2m\" (UID: \"cc84f60e-094e-4924-b6f1-f0a8ab81aa4e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzg2m" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.272588 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48be8ad8-4c02-4bea-a143-449763b39d54-trusted-ca\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.272626 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/48be8ad8-4c02-4bea-a143-449763b39d54-registry-tls\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.272685 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7a1dfb55-8680-4cbe-bd78-caca2e847caf-stats-auth\") pod \"router-default-5444994796-lz6mw\" (UID: \"7a1dfb55-8680-4cbe-bd78-caca2e847caf\") " pod="openshift-ingress/router-default-5444994796-lz6mw" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.272710 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/48be8ad8-4c02-4bea-a143-449763b39d54-bound-sa-token\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") 
" pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.272777 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjnvs\" (UniqueName: \"kubernetes.io/projected/b7cf219f-4e80-47fc-b349-ea5c7eab6d9d-kube-api-access-wjnvs\") pod \"dns-operator-744455d44c-sv7xw\" (UID: \"b7cf219f-4e80-47fc-b349-ea5c7eab6d9d\") " pod="openshift-dns-operator/dns-operator-744455d44c-sv7xw" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.272869 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/48be8ad8-4c02-4bea-a143-449763b39d54-registry-certificates\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.272992 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cc84f60e-094e-4924-b6f1-f0a8ab81aa4e-srv-cert\") pod \"catalog-operator-68c6474976-jzg2m\" (UID: \"cc84f60e-094e-4924-b6f1-f0a8ab81aa4e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzg2m" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.273024 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6f4x\" (UniqueName: \"kubernetes.io/projected/7a1dfb55-8680-4cbe-bd78-caca2e847caf-kube-api-access-k6f4x\") pod \"router-default-5444994796-lz6mw\" (UID: \"7a1dfb55-8680-4cbe-bd78-caca2e847caf\") " pod="openshift-ingress/router-default-5444994796-lz6mw" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.273068 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d9869e2-6f55-4246-8ed0-b8af9dab3f74-bound-sa-token\") pod \"ingress-operator-5b745b69d9-r6cxt\" (UID: \"1d9869e2-6f55-4246-8ed0-b8af9dab3f74\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r6cxt" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.273093 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a1dfb55-8680-4cbe-bd78-caca2e847caf-service-ca-bundle\") pod \"router-default-5444994796-lz6mw\" (UID: \"7a1dfb55-8680-4cbe-bd78-caca2e847caf\") " pod="openshift-ingress/router-default-5444994796-lz6mw" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.273320 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cc84f60e-094e-4924-b6f1-f0a8ab81aa4e-profile-collector-cert\") pod \"catalog-operator-68c6474976-jzg2m\" (UID: \"cc84f60e-094e-4924-b6f1-f0a8ab81aa4e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzg2m" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.273356 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n67sj\" (UniqueName: \"kubernetes.io/projected/2fa1ede8-3ea3-421d-929d-f6bf9cc1db0e-kube-api-access-n67sj\") pod \"migrator-59844c95c7-xtzbx\" (UID: \"2fa1ede8-3ea3-421d-929d-f6bf9cc1db0e\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xtzbx" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.273396 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d9869e2-6f55-4246-8ed0-b8af9dab3f74-metrics-tls\") pod \"ingress-operator-5b745b69d9-r6cxt\" (UID: \"1d9869e2-6f55-4246-8ed0-b8af9dab3f74\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r6cxt" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.273462 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a1dfb55-8680-4cbe-bd78-caca2e847caf-metrics-certs\") pod \"router-default-5444994796-lz6mw\" (UID: \"7a1dfb55-8680-4cbe-bd78-caca2e847caf\") " pod="openshift-ingress/router-default-5444994796-lz6mw" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.273485 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d9869e2-6f55-4246-8ed0-b8af9dab3f74-trusted-ca\") pod \"ingress-operator-5b745b69d9-r6cxt\" (UID: \"1d9869e2-6f55-4246-8ed0-b8af9dab3f74\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r6cxt" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.273596 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/80515d06-c09e-4c9d-a90f-43cc84edf4c9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-l2t56\" (UID: \"80515d06-c09e-4c9d-a90f-43cc84edf4c9\") " pod="openshift-marketplace/marketplace-operator-79b997595-l2t56" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.273627 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh5km\" (UniqueName: \"kubernetes.io/projected/48be8ad8-4c02-4bea-a143-449763b39d54-kube-api-access-wh5km\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.273661 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/48be8ad8-4c02-4bea-a143-449763b39d54-installation-pull-secrets\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.273686 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/48be8ad8-4c02-4bea-a143-449763b39d54-ca-trust-extracted\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.273744 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.273816 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b7cf219f-4e80-47fc-b349-ea5c7eab6d9d-metrics-tls\") pod \"dns-operator-744455d44c-sv7xw\" (UID: \"b7cf219f-4e80-47fc-b349-ea5c7eab6d9d\") " pod="openshift-dns-operator/dns-operator-744455d44c-sv7xw" Jan 29 16:12:16 crc kubenswrapper[4714]: E0129 16:12:16.274595 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:16.774581357 +0000 UTC m=+143.295082467 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.275545 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7a1dfb55-8680-4cbe-bd78-caca2e847caf-default-certificate\") pod \"router-default-5444994796-lz6mw\" (UID: \"7a1dfb55-8680-4cbe-bd78-caca2e847caf\") " pod="openshift-ingress/router-default-5444994796-lz6mw" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.329585 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fn75b"] Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.329674 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dwsm5"] Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.333076 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nh2m9"] Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.341275 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cp5md"] Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.395111 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.395329 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfz7c\" (UniqueName: \"kubernetes.io/projected/cc84f60e-094e-4924-b6f1-f0a8ab81aa4e-kube-api-access-zfz7c\") pod \"catalog-operator-68c6474976-jzg2m\" (UID: \"cc84f60e-094e-4924-b6f1-f0a8ab81aa4e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzg2m" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.395402 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1b632d84-c711-419a-9e24-bdb4c6e9aef6-config\") pod \"kube-apiserver-operator-766d6c64bb-chzp2\" (UID: \"1b632d84-c711-419a-9e24-bdb4c6e9aef6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-chzp2" Jan 29 16:12:16 crc kubenswrapper[4714]: E0129 16:12:16.395417 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:16.895398652 +0000 UTC m=+143.415899772 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.395456 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/38f843fe-c20b-4bc7-8f45-4ccd4b7be5a5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ljnh7\" (UID: \"38f843fe-c20b-4bc7-8f45-4ccd4b7be5a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ljnh7" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.395473 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45vkr\" (UniqueName: \"kubernetes.io/projected/fcaee576-dff0-4a67-a0b1-7347b3030729-kube-api-access-45vkr\") pod \"dns-default-jcdhl\" (UID: \"fcaee576-dff0-4a67-a0b1-7347b3030729\") " pod="openshift-dns/dns-default-jcdhl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.395535 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/714cef39-2960-4a25-ac81-a4e65a115eb3-plugins-dir\") pod \"csi-hostpathplugin-4nghl\" (UID: \"714cef39-2960-4a25-ac81-a4e65a115eb3\") " pod="hostpath-provisioner/csi-hostpathplugin-4nghl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.394782 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h8b4r"] Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.395561 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/48be8ad8-4c02-4bea-a143-449763b39d54-registry-tls\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.395580 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48be8ad8-4c02-4bea-a143-449763b39d54-trusted-ca\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.395599 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fzjx\" (UniqueName: 
\"kubernetes.io/projected/cb979c55-3027-4d92-94b9-cd17c32e6331-kube-api-access-7fzjx\") pod \"collect-profiles-29495040-5mkf8\" (UID: \"cb979c55-3027-4d92-94b9-cd17c32e6331\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5mkf8" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.395625 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqcjl\" (UniqueName: \"kubernetes.io/projected/38f843fe-c20b-4bc7-8f45-4ccd4b7be5a5-kube-api-access-fqcjl\") pod \"olm-operator-6b444d44fb-ljnh7\" (UID: \"38f843fe-c20b-4bc7-8f45-4ccd4b7be5a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ljnh7" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.395640 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb979c55-3027-4d92-94b9-cd17c32e6331-secret-volume\") pod \"collect-profiles-29495040-5mkf8\" (UID: \"cb979c55-3027-4d92-94b9-cd17c32e6331\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5mkf8" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.395655 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf5f9\" (UniqueName: \"kubernetes.io/projected/f18250a8-66c1-445d-9452-081de13b24f7-kube-api-access-lf5f9\") pod \"multus-admission-controller-857f4d67dd-zkbcz\" (UID: \"f18250a8-66c1-445d-9452-081de13b24f7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zkbcz" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.395673 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfkhq\" (UniqueName: \"kubernetes.io/projected/1fd5b799-74c2-4ffa-b3d9-6745c66ba28f-kube-api-access-vfkhq\") pod \"machine-config-server-zxb4v\" (UID: \"1fd5b799-74c2-4ffa-b3d9-6745c66ba28f\") " pod="openshift-machine-config-operator/machine-config-server-zxb4v" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.395690 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7a1dfb55-8680-4cbe-bd78-caca2e847caf-stats-auth\") pod \"router-default-5444994796-lz6mw\" (UID: \"7a1dfb55-8680-4cbe-bd78-caca2e847caf\") " pod="openshift-ingress/router-default-5444994796-lz6mw" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.395713 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/48be8ad8-4c02-4bea-a143-449763b39d54-bound-sa-token\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.395741 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjnvs\" (UniqueName: \"kubernetes.io/projected/b7cf219f-4e80-47fc-b349-ea5c7eab6d9d-kube-api-access-wjnvs\") pod \"dns-operator-744455d44c-sv7xw\" (UID: \"b7cf219f-4e80-47fc-b349-ea5c7eab6d9d\") " pod="openshift-dns-operator/dns-operator-744455d44c-sv7xw" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.395757 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab-tmpfs\") pod 
\"packageserver-d55dfcdfc-v68nn\" (UID: \"2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v68nn" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.395774 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6k2j\" (UniqueName: \"kubernetes.io/projected/8062d225-aa57-48df-bf28-2254ecc4f635-kube-api-access-z6k2j\") pod \"control-plane-machine-set-operator-78cbb6b69f-sq9mx\" (UID: \"8062d225-aa57-48df-bf28-2254ecc4f635\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sq9mx" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.395797 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1fd5b799-74c2-4ffa-b3d9-6745c66ba28f-node-bootstrap-token\") pod \"machine-config-server-zxb4v\" (UID: \"1fd5b799-74c2-4ffa-b3d9-6745c66ba28f\") " pod="openshift-machine-config-operator/machine-config-server-zxb4v" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.395811 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p4cl\" (UniqueName: \"kubernetes.io/projected/2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab-kube-api-access-4p4cl\") pod \"packageserver-d55dfcdfc-v68nn\" (UID: \"2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v68nn" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.395824 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwsfq\" (UniqueName: \"kubernetes.io/projected/714cef39-2960-4a25-ac81-a4e65a115eb3-kube-api-access-xwsfq\") pod \"csi-hostpathplugin-4nghl\" (UID: \"714cef39-2960-4a25-ac81-a4e65a115eb3\") " pod="hostpath-provisioner/csi-hostpathplugin-4nghl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.395843 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8062d225-aa57-48df-bf28-2254ecc4f635-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-sq9mx\" (UID: \"8062d225-aa57-48df-bf28-2254ecc4f635\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sq9mx" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.395876 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/48be8ad8-4c02-4bea-a143-449763b39d54-registry-certificates\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.396004 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb979c55-3027-4d92-94b9-cd17c32e6331-config-volume\") pod \"collect-profiles-29495040-5mkf8\" (UID: \"cb979c55-3027-4d92-94b9-cd17c32e6331\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5mkf8" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.396450 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/cc84f60e-094e-4924-b6f1-f0a8ab81aa4e-srv-cert\") pod \"catalog-operator-68c6474976-jzg2m\" (UID: \"cc84f60e-094e-4924-b6f1-f0a8ab81aa4e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzg2m" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.396498 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b632d84-c711-419a-9e24-bdb4c6e9aef6-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-chzp2\" (UID: \"1b632d84-c711-419a-9e24-bdb4c6e9aef6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-chzp2" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.396587 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d9869e2-6f55-4246-8ed0-b8af9dab3f74-bound-sa-token\") pod \"ingress-operator-5b745b69d9-r6cxt\" (UID: \"1d9869e2-6f55-4246-8ed0-b8af9dab3f74\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r6cxt" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.396633 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6f4x\" (UniqueName: \"kubernetes.io/projected/7a1dfb55-8680-4cbe-bd78-caca2e847caf-kube-api-access-k6f4x\") pod \"router-default-5444994796-lz6mw\" (UID: \"7a1dfb55-8680-4cbe-bd78-caca2e847caf\") " pod="openshift-ingress/router-default-5444994796-lz6mw" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.396675 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/adc3900b-dce0-4da4-bfc2-bca85b2395b2-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pbrmk\" (UID: \"adc3900b-dce0-4da4-bfc2-bca85b2395b2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbrmk" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.396707 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a1dfb55-8680-4cbe-bd78-caca2e847caf-service-ca-bundle\") pod \"router-default-5444994796-lz6mw\" (UID: \"7a1dfb55-8680-4cbe-bd78-caca2e847caf\") " pod="openshift-ingress/router-default-5444994796-lz6mw" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.396732 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/38f843fe-c20b-4bc7-8f45-4ccd4b7be5a5-srv-cert\") pod \"olm-operator-6b444d44fb-ljnh7\" (UID: \"38f843fe-c20b-4bc7-8f45-4ccd4b7be5a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ljnh7" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.396758 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/714cef39-2960-4a25-ac81-a4e65a115eb3-socket-dir\") pod \"csi-hostpathplugin-4nghl\" (UID: \"714cef39-2960-4a25-ac81-a4e65a115eb3\") " pod="hostpath-provisioner/csi-hostpathplugin-4nghl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.396784 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/714cef39-2960-4a25-ac81-a4e65a115eb3-csi-data-dir\") pod 
\"csi-hostpathplugin-4nghl\" (UID: \"714cef39-2960-4a25-ac81-a4e65a115eb3\") " pod="hostpath-provisioner/csi-hostpathplugin-4nghl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.396828 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cc84f60e-094e-4924-b6f1-f0a8ab81aa4e-profile-collector-cert\") pod \"catalog-operator-68c6474976-jzg2m\" (UID: \"cc84f60e-094e-4924-b6f1-f0a8ab81aa4e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzg2m" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.396877 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n67sj\" (UniqueName: \"kubernetes.io/projected/2fa1ede8-3ea3-421d-929d-f6bf9cc1db0e-kube-api-access-n67sj\") pod \"migrator-59844c95c7-xtzbx\" (UID: \"2fa1ede8-3ea3-421d-929d-f6bf9cc1db0e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xtzbx" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.396907 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f18250a8-66c1-445d-9452-081de13b24f7-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zkbcz\" (UID: \"f18250a8-66c1-445d-9452-081de13b24f7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zkbcz" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.396977 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d9869e2-6f55-4246-8ed0-b8af9dab3f74-metrics-tls\") pod \"ingress-operator-5b745b69d9-r6cxt\" (UID: \"1d9869e2-6f55-4246-8ed0-b8af9dab3f74\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r6cxt" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.397003 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fcaee576-dff0-4a67-a0b1-7347b3030729-config-volume\") pod \"dns-default-jcdhl\" (UID: \"fcaee576-dff0-4a67-a0b1-7347b3030729\") " pod="openshift-dns/dns-default-jcdhl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.397183 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/706713ee-0ea2-4018-847c-ccf3a0fafb1c-signing-key\") pod \"service-ca-9c57cc56f-kfqcf\" (UID: \"706713ee-0ea2-4018-847c-ccf3a0fafb1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-kfqcf" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.397228 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a1dfb55-8680-4cbe-bd78-caca2e847caf-metrics-certs\") pod \"router-default-5444994796-lz6mw\" (UID: \"7a1dfb55-8680-4cbe-bd78-caca2e847caf\") " pod="openshift-ingress/router-default-5444994796-lz6mw" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.397251 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh8t8\" (UniqueName: \"kubernetes.io/projected/77b31235-8b07-4d66-aec8-64e5b7fae08e-kube-api-access-nh8t8\") pod \"service-ca-operator-777779d784-44gfk\" (UID: \"77b31235-8b07-4d66-aec8-64e5b7fae08e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-44gfk" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 
16:12:16.397282 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d9869e2-6f55-4246-8ed0-b8af9dab3f74-trusted-ca\") pod \"ingress-operator-5b745b69d9-r6cxt\" (UID: \"1d9869e2-6f55-4246-8ed0-b8af9dab3f74\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r6cxt" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.397300 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b632d84-c711-419a-9e24-bdb4c6e9aef6-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-chzp2\" (UID: \"1b632d84-c711-419a-9e24-bdb4c6e9aef6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-chzp2" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.397465 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a1dfb55-8680-4cbe-bd78-caca2e847caf-service-ca-bundle\") pod \"router-default-5444994796-lz6mw\" (UID: \"7a1dfb55-8680-4cbe-bd78-caca2e847caf\") " pod="openshift-ingress/router-default-5444994796-lz6mw" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.397803 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nzm9\" (UniqueName: \"kubernetes.io/projected/706713ee-0ea2-4018-847c-ccf3a0fafb1c-kube-api-access-8nzm9\") pod \"service-ca-9c57cc56f-kfqcf\" (UID: \"706713ee-0ea2-4018-847c-ccf3a0fafb1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-kfqcf" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.397831 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/80515d06-c09e-4c9d-a90f-43cc84edf4c9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-l2t56\" (UID: \"80515d06-c09e-4c9d-a90f-43cc84edf4c9\") " pod="openshift-marketplace/marketplace-operator-79b997595-l2t56" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.397850 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/706713ee-0ea2-4018-847c-ccf3a0fafb1c-signing-cabundle\") pod \"service-ca-9c57cc56f-kfqcf\" (UID: \"706713ee-0ea2-4018-847c-ccf3a0fafb1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-kfqcf" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.397967 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1fd5b799-74c2-4ffa-b3d9-6745c66ba28f-certs\") pod \"machine-config-server-zxb4v\" (UID: \"1fd5b799-74c2-4ffa-b3d9-6745c66ba28f\") " pod="openshift-machine-config-operator/machine-config-server-zxb4v" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.397998 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh5km\" (UniqueName: \"kubernetes.io/projected/48be8ad8-4c02-4bea-a143-449763b39d54-kube-api-access-wh5km\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.398019 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/77b31235-8b07-4d66-aec8-64e5b7fae08e-config\") pod \"service-ca-operator-777779d784-44gfk\" (UID: \"77b31235-8b07-4d66-aec8-64e5b7fae08e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-44gfk" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.399488 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7a1dfb55-8680-4cbe-bd78-caca2e847caf-stats-auth\") pod \"router-default-5444994796-lz6mw\" (UID: \"7a1dfb55-8680-4cbe-bd78-caca2e847caf\") " pod="openshift-ingress/router-default-5444994796-lz6mw" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.399745 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab-apiservice-cert\") pod \"packageserver-d55dfcdfc-v68nn\" (UID: \"2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v68nn" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.399775 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/48be8ad8-4c02-4bea-a143-449763b39d54-installation-pull-secrets\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.399839 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/48be8ad8-4c02-4bea-a143-449763b39d54-ca-trust-extracted\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.399884 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fcaee576-dff0-4a67-a0b1-7347b3030729-metrics-tls\") pod \"dns-default-jcdhl\" (UID: \"fcaee576-dff0-4a67-a0b1-7347b3030729\") " pod="openshift-dns/dns-default-jcdhl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.399890 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cc84f60e-094e-4924-b6f1-f0a8ab81aa4e-srv-cert\") pod \"catalog-operator-68c6474976-jzg2m\" (UID: \"cc84f60e-094e-4924-b6f1-f0a8ab81aa4e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzg2m" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.401780 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02158b16-2eb1-4b8e-b1bb-55285b22d053-cert\") pod \"ingress-canary-vhtdt\" (UID: \"02158b16-2eb1-4b8e-b1bb-55285b22d053\") " pod="openshift-ingress-canary/ingress-canary-vhtdt" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.401824 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc 
kubenswrapper[4714]: I0129 16:12:16.401906 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cc84f60e-094e-4924-b6f1-f0a8ab81aa4e-profile-collector-cert\") pod \"catalog-operator-68c6474976-jzg2m\" (UID: \"cc84f60e-094e-4924-b6f1-f0a8ab81aa4e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzg2m" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.402098 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g6v4\" (UniqueName: \"kubernetes.io/projected/adc3900b-dce0-4da4-bfc2-bca85b2395b2-kube-api-access-4g6v4\") pod \"package-server-manager-789f6589d5-pbrmk\" (UID: \"adc3900b-dce0-4da4-bfc2-bca85b2395b2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbrmk" Jan 29 16:12:16 crc kubenswrapper[4714]: E0129 16:12:16.402197 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:16.902166629 +0000 UTC m=+143.422667749 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.402207 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a1dfb55-8680-4cbe-bd78-caca2e847caf-metrics-certs\") pod \"router-default-5444994796-lz6mw\" (UID: \"7a1dfb55-8680-4cbe-bd78-caca2e847caf\") " pod="openshift-ingress/router-default-5444994796-lz6mw" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.402278 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/714cef39-2960-4a25-ac81-a4e65a115eb3-mountpoint-dir\") pod \"csi-hostpathplugin-4nghl\" (UID: \"714cef39-2960-4a25-ac81-a4e65a115eb3\") " pod="hostpath-provisioner/csi-hostpathplugin-4nghl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.402969 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b7cf219f-4e80-47fc-b349-ea5c7eab6d9d-metrics-tls\") pod \"dns-operator-744455d44c-sv7xw\" (UID: \"b7cf219f-4e80-47fc-b349-ea5c7eab6d9d\") " pod="openshift-dns-operator/dns-operator-744455d44c-sv7xw" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.403441 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7a1dfb55-8680-4cbe-bd78-caca2e847caf-default-certificate\") pod \"router-default-5444994796-lz6mw\" (UID: \"7a1dfb55-8680-4cbe-bd78-caca2e847caf\") " pod="openshift-ingress/router-default-5444994796-lz6mw" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.403507 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/714cef39-2960-4a25-ac81-a4e65a115eb3-registration-dir\") pod \"csi-hostpathplugin-4nghl\" (UID: \"714cef39-2960-4a25-ac81-a4e65a115eb3\") " pod="hostpath-provisioner/csi-hostpathplugin-4nghl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.403541 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77b31235-8b07-4d66-aec8-64e5b7fae08e-serving-cert\") pod \"service-ca-operator-777779d784-44gfk\" (UID: \"77b31235-8b07-4d66-aec8-64e5b7fae08e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-44gfk" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.404085 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfkhr\" (UniqueName: \"kubernetes.io/projected/1d9869e2-6f55-4246-8ed0-b8af9dab3f74-kube-api-access-gfkhr\") pod \"ingress-operator-5b745b69d9-r6cxt\" (UID: \"1d9869e2-6f55-4246-8ed0-b8af9dab3f74\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r6cxt" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.404160 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80515d06-c09e-4c9d-a90f-43cc84edf4c9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-l2t56\" (UID: \"80515d06-c09e-4c9d-a90f-43cc84edf4c9\") " pod="openshift-marketplace/marketplace-operator-79b997595-l2t56" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.404544 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsbnv\" (UniqueName: \"kubernetes.io/projected/02158b16-2eb1-4b8e-b1bb-55285b22d053-kube-api-access-qsbnv\") pod \"ingress-canary-vhtdt\" (UID: \"02158b16-2eb1-4b8e-b1bb-55285b22d053\") " pod="openshift-ingress-canary/ingress-canary-vhtdt" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.404598 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfc66\" (UniqueName: \"kubernetes.io/projected/80515d06-c09e-4c9d-a90f-43cc84edf4c9-kube-api-access-xfc66\") pod \"marketplace-operator-79b997595-l2t56\" (UID: \"80515d06-c09e-4c9d-a90f-43cc84edf4c9\") " pod="openshift-marketplace/marketplace-operator-79b997595-l2t56" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.404663 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab-webhook-cert\") pod \"packageserver-d55dfcdfc-v68nn\" (UID: \"2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v68nn" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.405770 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80515d06-c09e-4c9d-a90f-43cc84edf4c9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-l2t56\" (UID: \"80515d06-c09e-4c9d-a90f-43cc84edf4c9\") " pod="openshift-marketplace/marketplace-operator-79b997595-l2t56" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.405859 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b7cf219f-4e80-47fc-b349-ea5c7eab6d9d-metrics-tls\") pod \"dns-operator-744455d44c-sv7xw\" (UID: \"b7cf219f-4e80-47fc-b349-ea5c7eab6d9d\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-sv7xw" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.406748 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/80515d06-c09e-4c9d-a90f-43cc84edf4c9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-l2t56\" (UID: \"80515d06-c09e-4c9d-a90f-43cc84edf4c9\") " pod="openshift-marketplace/marketplace-operator-79b997595-l2t56" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.420698 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfz7c\" (UniqueName: \"kubernetes.io/projected/cc84f60e-094e-4924-b6f1-f0a8ab81aa4e-kube-api-access-zfz7c\") pod \"catalog-operator-68c6474976-jzg2m\" (UID: \"cc84f60e-094e-4924-b6f1-f0a8ab81aa4e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzg2m" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.429824 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nf7jb" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.442793 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjnvs\" (UniqueName: \"kubernetes.io/projected/b7cf219f-4e80-47fc-b349-ea5c7eab6d9d-kube-api-access-wjnvs\") pod \"dns-operator-744455d44c-sv7xw\" (UID: \"b7cf219f-4e80-47fc-b349-ea5c7eab6d9d\") " pod="openshift-dns-operator/dns-operator-744455d44c-sv7xw" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.443511 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw"] Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.454706 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-sv7xw" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.454822 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7a1dfb55-8680-4cbe-bd78-caca2e847caf-default-certificate\") pod \"router-default-5444994796-lz6mw\" (UID: \"7a1dfb55-8680-4cbe-bd78-caca2e847caf\") " pod="openshift-ingress/router-default-5444994796-lz6mw" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.456071 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/48be8ad8-4c02-4bea-a143-449763b39d54-ca-trust-extracted\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.456855 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48be8ad8-4c02-4bea-a143-449763b39d54-trusted-ca\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.458226 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/48be8ad8-4c02-4bea-a143-449763b39d54-installation-pull-secrets\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.458362 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/48be8ad8-4c02-4bea-a143-449763b39d54-registry-tls\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.466997 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d9869e2-6f55-4246-8ed0-b8af9dab3f74-trusted-ca\") pod \"ingress-operator-5b745b69d9-r6cxt\" (UID: \"1d9869e2-6f55-4246-8ed0-b8af9dab3f74\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r6cxt" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.469660 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d9869e2-6f55-4246-8ed0-b8af9dab3f74-metrics-tls\") pod \"ingress-operator-5b745b69d9-r6cxt\" (UID: \"1d9869e2-6f55-4246-8ed0-b8af9dab3f74\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r6cxt" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.470358 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/48be8ad8-4c02-4bea-a143-449763b39d54-bound-sa-token\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: W0129 16:12:16.472352 4714 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f71ba3e_c687_4ff7_9475_1e18ded764f6.slice/crio-91c8d3bea295ee12e93d224413f942892c3f7aeff1c8def01e65177720debd20 WatchSource:0}: Error finding container 91c8d3bea295ee12e93d224413f942892c3f7aeff1c8def01e65177720debd20: Status 404 returned error can't find the container with id 91c8d3bea295ee12e93d224413f942892c3f7aeff1c8def01e65177720debd20 Jan 29 16:12:16 crc kubenswrapper[4714]: W0129 16:12:16.476073 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0387af3d_8796_46b0_9282_9ecbda7fe3a7.slice/crio-aeff327d5990523e2cb090f23196b9fa5cb535b0bfdecedfb353339232ef1474 WatchSource:0}: Error finding container aeff327d5990523e2cb090f23196b9fa5cb535b0bfdecedfb353339232ef1474: Status 404 returned error can't find the container with id aeff327d5990523e2cb090f23196b9fa5cb535b0bfdecedfb353339232ef1474 Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.483060 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d9869e2-6f55-4246-8ed0-b8af9dab3f74-bound-sa-token\") pod \"ingress-operator-5b745b69d9-r6cxt\" (UID: \"1d9869e2-6f55-4246-8ed0-b8af9dab3f74\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r6cxt" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.505444 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:16 crc kubenswrapper[4714]: E0129 16:12:16.505585 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:17.00556165 +0000 UTC m=+143.526062760 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.505711 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f18250a8-66c1-445d-9452-081de13b24f7-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zkbcz\" (UID: \"f18250a8-66c1-445d-9452-081de13b24f7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zkbcz" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.505743 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fcaee576-dff0-4a67-a0b1-7347b3030729-config-volume\") pod \"dns-default-jcdhl\" (UID: \"fcaee576-dff0-4a67-a0b1-7347b3030729\") " pod="openshift-dns/dns-default-jcdhl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.505769 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/706713ee-0ea2-4018-847c-ccf3a0fafb1c-signing-key\") pod \"service-ca-9c57cc56f-kfqcf\" (UID: \"706713ee-0ea2-4018-847c-ccf3a0fafb1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-kfqcf" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.505793 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh8t8\" (UniqueName: \"kubernetes.io/projected/77b31235-8b07-4d66-aec8-64e5b7fae08e-kube-api-access-nh8t8\") pod \"service-ca-operator-777779d784-44gfk\" (UID: \"77b31235-8b07-4d66-aec8-64e5b7fae08e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-44gfk" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.505817 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b632d84-c711-419a-9e24-bdb4c6e9aef6-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-chzp2\" (UID: \"1b632d84-c711-419a-9e24-bdb4c6e9aef6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-chzp2" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.505839 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nzm9\" (UniqueName: \"kubernetes.io/projected/706713ee-0ea2-4018-847c-ccf3a0fafb1c-kube-api-access-8nzm9\") pod \"service-ca-9c57cc56f-kfqcf\" (UID: \"706713ee-0ea2-4018-847c-ccf3a0fafb1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-kfqcf" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.505861 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/706713ee-0ea2-4018-847c-ccf3a0fafb1c-signing-cabundle\") pod \"service-ca-9c57cc56f-kfqcf\" (UID: \"706713ee-0ea2-4018-847c-ccf3a0fafb1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-kfqcf" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.505890 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1fd5b799-74c2-4ffa-b3d9-6745c66ba28f-certs\") 
pod \"machine-config-server-zxb4v\" (UID: \"1fd5b799-74c2-4ffa-b3d9-6745c66ba28f\") " pod="openshift-machine-config-operator/machine-config-server-zxb4v" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.505912 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77b31235-8b07-4d66-aec8-64e5b7fae08e-config\") pod \"service-ca-operator-777779d784-44gfk\" (UID: \"77b31235-8b07-4d66-aec8-64e5b7fae08e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-44gfk" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.505939 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab-apiservice-cert\") pod \"packageserver-d55dfcdfc-v68nn\" (UID: \"2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v68nn" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.505976 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fcaee576-dff0-4a67-a0b1-7347b3030729-metrics-tls\") pod \"dns-default-jcdhl\" (UID: \"fcaee576-dff0-4a67-a0b1-7347b3030729\") " pod="openshift-dns/dns-default-jcdhl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506009 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02158b16-2eb1-4b8e-b1bb-55285b22d053-cert\") pod \"ingress-canary-vhtdt\" (UID: \"02158b16-2eb1-4b8e-b1bb-55285b22d053\") " pod="openshift-ingress-canary/ingress-canary-vhtdt" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506036 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506059 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g6v4\" (UniqueName: \"kubernetes.io/projected/adc3900b-dce0-4da4-bfc2-bca85b2395b2-kube-api-access-4g6v4\") pod \"package-server-manager-789f6589d5-pbrmk\" (UID: \"adc3900b-dce0-4da4-bfc2-bca85b2395b2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbrmk" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506092 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/714cef39-2960-4a25-ac81-a4e65a115eb3-mountpoint-dir\") pod \"csi-hostpathplugin-4nghl\" (UID: \"714cef39-2960-4a25-ac81-a4e65a115eb3\") " pod="hostpath-provisioner/csi-hostpathplugin-4nghl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506130 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77b31235-8b07-4d66-aec8-64e5b7fae08e-serving-cert\") pod \"service-ca-operator-777779d784-44gfk\" (UID: \"77b31235-8b07-4d66-aec8-64e5b7fae08e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-44gfk" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506152 4714 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/714cef39-2960-4a25-ac81-a4e65a115eb3-registration-dir\") pod \"csi-hostpathplugin-4nghl\" (UID: \"714cef39-2960-4a25-ac81-a4e65a115eb3\") " pod="hostpath-provisioner/csi-hostpathplugin-4nghl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506186 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsbnv\" (UniqueName: \"kubernetes.io/projected/02158b16-2eb1-4b8e-b1bb-55285b22d053-kube-api-access-qsbnv\") pod \"ingress-canary-vhtdt\" (UID: \"02158b16-2eb1-4b8e-b1bb-55285b22d053\") " pod="openshift-ingress-canary/ingress-canary-vhtdt" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506221 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab-webhook-cert\") pod \"packageserver-d55dfcdfc-v68nn\" (UID: \"2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v68nn" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506258 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b632d84-c711-419a-9e24-bdb4c6e9aef6-config\") pod \"kube-apiserver-operator-766d6c64bb-chzp2\" (UID: \"1b632d84-c711-419a-9e24-bdb4c6e9aef6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-chzp2" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506284 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/38f843fe-c20b-4bc7-8f45-4ccd4b7be5a5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ljnh7\" (UID: \"38f843fe-c20b-4bc7-8f45-4ccd4b7be5a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ljnh7" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506308 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45vkr\" (UniqueName: \"kubernetes.io/projected/fcaee576-dff0-4a67-a0b1-7347b3030729-kube-api-access-45vkr\") pod \"dns-default-jcdhl\" (UID: \"fcaee576-dff0-4a67-a0b1-7347b3030729\") " pod="openshift-dns/dns-default-jcdhl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506330 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/714cef39-2960-4a25-ac81-a4e65a115eb3-plugins-dir\") pod \"csi-hostpathplugin-4nghl\" (UID: \"714cef39-2960-4a25-ac81-a4e65a115eb3\") " pod="hostpath-provisioner/csi-hostpathplugin-4nghl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506355 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fzjx\" (UniqueName: \"kubernetes.io/projected/cb979c55-3027-4d92-94b9-cd17c32e6331-kube-api-access-7fzjx\") pod \"collect-profiles-29495040-5mkf8\" (UID: \"cb979c55-3027-4d92-94b9-cd17c32e6331\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5mkf8" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506378 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqcjl\" (UniqueName: \"kubernetes.io/projected/38f843fe-c20b-4bc7-8f45-4ccd4b7be5a5-kube-api-access-fqcjl\") pod \"olm-operator-6b444d44fb-ljnh7\" (UID: \"38f843fe-c20b-4bc7-8f45-4ccd4b7be5a5\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ljnh7" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506399 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb979c55-3027-4d92-94b9-cd17c32e6331-secret-volume\") pod \"collect-profiles-29495040-5mkf8\" (UID: \"cb979c55-3027-4d92-94b9-cd17c32e6331\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5mkf8" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506424 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf5f9\" (UniqueName: \"kubernetes.io/projected/f18250a8-66c1-445d-9452-081de13b24f7-kube-api-access-lf5f9\") pod \"multus-admission-controller-857f4d67dd-zkbcz\" (UID: \"f18250a8-66c1-445d-9452-081de13b24f7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zkbcz" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506448 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfkhq\" (UniqueName: \"kubernetes.io/projected/1fd5b799-74c2-4ffa-b3d9-6745c66ba28f-kube-api-access-vfkhq\") pod \"machine-config-server-zxb4v\" (UID: \"1fd5b799-74c2-4ffa-b3d9-6745c66ba28f\") " pod="openshift-machine-config-operator/machine-config-server-zxb4v" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506478 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6k2j\" (UniqueName: \"kubernetes.io/projected/8062d225-aa57-48df-bf28-2254ecc4f635-kube-api-access-z6k2j\") pod \"control-plane-machine-set-operator-78cbb6b69f-sq9mx\" (UID: \"8062d225-aa57-48df-bf28-2254ecc4f635\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sq9mx" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506500 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab-tmpfs\") pod \"packageserver-d55dfcdfc-v68nn\" (UID: \"2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v68nn" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506523 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1fd5b799-74c2-4ffa-b3d9-6745c66ba28f-node-bootstrap-token\") pod \"machine-config-server-zxb4v\" (UID: \"1fd5b799-74c2-4ffa-b3d9-6745c66ba28f\") " pod="openshift-machine-config-operator/machine-config-server-zxb4v" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506545 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p4cl\" (UniqueName: \"kubernetes.io/projected/2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab-kube-api-access-4p4cl\") pod \"packageserver-d55dfcdfc-v68nn\" (UID: \"2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v68nn" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506569 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8062d225-aa57-48df-bf28-2254ecc4f635-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-sq9mx\" (UID: \"8062d225-aa57-48df-bf28-2254ecc4f635\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sq9mx" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506594 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwsfq\" (UniqueName: \"kubernetes.io/projected/714cef39-2960-4a25-ac81-a4e65a115eb3-kube-api-access-xwsfq\") pod \"csi-hostpathplugin-4nghl\" (UID: \"714cef39-2960-4a25-ac81-a4e65a115eb3\") " pod="hostpath-provisioner/csi-hostpathplugin-4nghl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506615 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb979c55-3027-4d92-94b9-cd17c32e6331-config-volume\") pod \"collect-profiles-29495040-5mkf8\" (UID: \"cb979c55-3027-4d92-94b9-cd17c32e6331\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5mkf8" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506657 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b632d84-c711-419a-9e24-bdb4c6e9aef6-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-chzp2\" (UID: \"1b632d84-c711-419a-9e24-bdb4c6e9aef6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-chzp2" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506688 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/adc3900b-dce0-4da4-bfc2-bca85b2395b2-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pbrmk\" (UID: \"adc3900b-dce0-4da4-bfc2-bca85b2395b2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbrmk" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506712 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/714cef39-2960-4a25-ac81-a4e65a115eb3-socket-dir\") pod \"csi-hostpathplugin-4nghl\" (UID: \"714cef39-2960-4a25-ac81-a4e65a115eb3\") " pod="hostpath-provisioner/csi-hostpathplugin-4nghl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506736 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/38f843fe-c20b-4bc7-8f45-4ccd4b7be5a5-srv-cert\") pod \"olm-operator-6b444d44fb-ljnh7\" (UID: \"38f843fe-c20b-4bc7-8f45-4ccd4b7be5a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ljnh7" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506760 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/714cef39-2960-4a25-ac81-a4e65a115eb3-csi-data-dir\") pod \"csi-hostpathplugin-4nghl\" (UID: \"714cef39-2960-4a25-ac81-a4e65a115eb3\") " pod="hostpath-provisioner/csi-hostpathplugin-4nghl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.506801 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fcaee576-dff0-4a67-a0b1-7347b3030729-config-volume\") pod \"dns-default-jcdhl\" (UID: \"fcaee576-dff0-4a67-a0b1-7347b3030729\") " pod="openshift-dns/dns-default-jcdhl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.507191 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/714cef39-2960-4a25-ac81-a4e65a115eb3-plugins-dir\") pod \"csi-hostpathplugin-4nghl\" (UID: \"714cef39-2960-4a25-ac81-a4e65a115eb3\") " pod="hostpath-provisioner/csi-hostpathplugin-4nghl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.507477 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/714cef39-2960-4a25-ac81-a4e65a115eb3-mountpoint-dir\") pod \"csi-hostpathplugin-4nghl\" (UID: \"714cef39-2960-4a25-ac81-a4e65a115eb3\") " pod="hostpath-provisioner/csi-hostpathplugin-4nghl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.507916 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/706713ee-0ea2-4018-847c-ccf3a0fafb1c-signing-cabundle\") pod \"service-ca-9c57cc56f-kfqcf\" (UID: \"706713ee-0ea2-4018-847c-ccf3a0fafb1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-kfqcf" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.508527 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/714cef39-2960-4a25-ac81-a4e65a115eb3-socket-dir\") pod \"csi-hostpathplugin-4nghl\" (UID: \"714cef39-2960-4a25-ac81-a4e65a115eb3\") " pod="hostpath-provisioner/csi-hostpathplugin-4nghl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.508687 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb979c55-3027-4d92-94b9-cd17c32e6331-config-volume\") pod \"collect-profiles-29495040-5mkf8\" (UID: \"cb979c55-3027-4d92-94b9-cd17c32e6331\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5mkf8" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.510049 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77b31235-8b07-4d66-aec8-64e5b7fae08e-config\") pod \"service-ca-operator-777779d784-44gfk\" (UID: \"77b31235-8b07-4d66-aec8-64e5b7fae08e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-44gfk" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.510453 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/714cef39-2960-4a25-ac81-a4e65a115eb3-registration-dir\") pod \"csi-hostpathplugin-4nghl\" (UID: \"714cef39-2960-4a25-ac81-a4e65a115eb3\") " pod="hostpath-provisioner/csi-hostpathplugin-4nghl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.510973 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab-tmpfs\") pod \"packageserver-d55dfcdfc-v68nn\" (UID: \"2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v68nn" Jan 29 16:12:16 crc kubenswrapper[4714]: E0129 16:12:16.510925 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:17.010906344 +0000 UTC m=+143.531407464 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.511106 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/714cef39-2960-4a25-ac81-a4e65a115eb3-csi-data-dir\") pod \"csi-hostpathplugin-4nghl\" (UID: \"714cef39-2960-4a25-ac81-a4e65a115eb3\") " pod="hostpath-provisioner/csi-hostpathplugin-4nghl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.511268 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/706713ee-0ea2-4018-847c-ccf3a0fafb1c-signing-key\") pod \"service-ca-9c57cc56f-kfqcf\" (UID: \"706713ee-0ea2-4018-847c-ccf3a0fafb1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-kfqcf" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.511292 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b632d84-c711-419a-9e24-bdb4c6e9aef6-config\") pod \"kube-apiserver-operator-766d6c64bb-chzp2\" (UID: \"1b632d84-c711-419a-9e24-bdb4c6e9aef6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-chzp2" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.511539 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fcaee576-dff0-4a67-a0b1-7347b3030729-metrics-tls\") pod \"dns-default-jcdhl\" (UID: \"fcaee576-dff0-4a67-a0b1-7347b3030729\") " pod="openshift-dns/dns-default-jcdhl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.511471 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b632d84-c711-419a-9e24-bdb4c6e9aef6-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-chzp2\" (UID: \"1b632d84-c711-419a-9e24-bdb4c6e9aef6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-chzp2" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.511711 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f18250a8-66c1-445d-9452-081de13b24f7-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zkbcz\" (UID: \"f18250a8-66c1-445d-9452-081de13b24f7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zkbcz" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.512069 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzg2m" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.512256 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77b31235-8b07-4d66-aec8-64e5b7fae08e-serving-cert\") pod \"service-ca-operator-777779d784-44gfk\" (UID: \"77b31235-8b07-4d66-aec8-64e5b7fae08e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-44gfk" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.513174 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/38f843fe-c20b-4bc7-8f45-4ccd4b7be5a5-srv-cert\") pod \"olm-operator-6b444d44fb-ljnh7\" (UID: \"38f843fe-c20b-4bc7-8f45-4ccd4b7be5a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ljnh7" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.513423 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8062d225-aa57-48df-bf28-2254ecc4f635-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-sq9mx\" (UID: \"8062d225-aa57-48df-bf28-2254ecc4f635\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sq9mx" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.513617 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1fd5b799-74c2-4ffa-b3d9-6745c66ba28f-certs\") pod \"machine-config-server-zxb4v\" (UID: \"1fd5b799-74c2-4ffa-b3d9-6745c66ba28f\") " pod="openshift-machine-config-operator/machine-config-server-zxb4v" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.514183 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6f4x\" (UniqueName: \"kubernetes.io/projected/7a1dfb55-8680-4cbe-bd78-caca2e847caf-kube-api-access-k6f4x\") pod \"router-default-5444994796-lz6mw\" (UID: \"7a1dfb55-8680-4cbe-bd78-caca2e847caf\") " pod="openshift-ingress/router-default-5444994796-lz6mw" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.514385 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1fd5b799-74c2-4ffa-b3d9-6745c66ba28f-node-bootstrap-token\") pod \"machine-config-server-zxb4v\" (UID: \"1fd5b799-74c2-4ffa-b3d9-6745c66ba28f\") " pod="openshift-machine-config-operator/machine-config-server-zxb4v" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.514593 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02158b16-2eb1-4b8e-b1bb-55285b22d053-cert\") pod \"ingress-canary-vhtdt\" (UID: \"02158b16-2eb1-4b8e-b1bb-55285b22d053\") " pod="openshift-ingress-canary/ingress-canary-vhtdt" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.514691 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/38f843fe-c20b-4bc7-8f45-4ccd4b7be5a5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ljnh7\" (UID: \"38f843fe-c20b-4bc7-8f45-4ccd4b7be5a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ljnh7" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.516250 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab-webhook-cert\") pod \"packageserver-d55dfcdfc-v68nn\" (UID: \"2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v68nn" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.516875 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/adc3900b-dce0-4da4-bfc2-bca85b2395b2-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pbrmk\" (UID: \"adc3900b-dce0-4da4-bfc2-bca85b2395b2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbrmk" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.517596 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab-apiservice-cert\") pod \"packageserver-d55dfcdfc-v68nn\" (UID: \"2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v68nn" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.544927 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n67sj\" (UniqueName: \"kubernetes.io/projected/2fa1ede8-3ea3-421d-929d-f6bf9cc1db0e-kube-api-access-n67sj\") pod \"migrator-59844c95c7-xtzbx\" (UID: \"2fa1ede8-3ea3-421d-929d-f6bf9cc1db0e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xtzbx" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.577926 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh5km\" (UniqueName: \"kubernetes.io/projected/48be8ad8-4c02-4bea-a143-449763b39d54-kube-api-access-wh5km\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.586675 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfkhr\" (UniqueName: \"kubernetes.io/projected/1d9869e2-6f55-4246-8ed0-b8af9dab3f74-kube-api-access-gfkhr\") pod \"ingress-operator-5b745b69d9-r6cxt\" (UID: \"1d9869e2-6f55-4246-8ed0-b8af9dab3f74\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r6cxt" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.607555 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:16 crc kubenswrapper[4714]: E0129 16:12:16.607769 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:17.107752475 +0000 UTC m=+143.628253585 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.608120 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfc66\" (UniqueName: \"kubernetes.io/projected/80515d06-c09e-4c9d-a90f-43cc84edf4c9-kube-api-access-xfc66\") pod \"marketplace-operator-79b997595-l2t56\" (UID: \"80515d06-c09e-4c9d-a90f-43cc84edf4c9\") " pod="openshift-marketplace/marketplace-operator-79b997595-l2t56" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.608234 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: E0129 16:12:16.609175 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:17.108637612 +0000 UTC m=+143.629138732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.624694 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6jl75"] Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.628200 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99knh"] Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.643913 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b632d84-c711-419a-9e24-bdb4c6e9aef6-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-chzp2\" (UID: \"1b632d84-c711-419a-9e24-bdb4c6e9aef6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-chzp2" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.665273 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fzjx\" (UniqueName: \"kubernetes.io/projected/cb979c55-3027-4d92-94b9-cd17c32e6331-kube-api-access-7fzjx\") pod \"collect-profiles-29495040-5mkf8\" (UID: \"cb979c55-3027-4d92-94b9-cd17c32e6331\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5mkf8" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.669766 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/machine-api-operator-5694c8668f-z4h55"] Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.682468 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqcjl\" (UniqueName: \"kubernetes.io/projected/38f843fe-c20b-4bc7-8f45-4ccd4b7be5a5-kube-api-access-fqcjl\") pod \"olm-operator-6b444d44fb-ljnh7\" (UID: \"38f843fe-c20b-4bc7-8f45-4ccd4b7be5a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ljnh7" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.701009 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4dn69"] Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.709068 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:16 crc kubenswrapper[4714]: E0129 16:12:16.709177 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:17.209155996 +0000 UTC m=+143.729657116 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.709463 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: E0129 16:12:16.709982 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:17.209973881 +0000 UTC m=+143.730475001 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.728213 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nzm9\" (UniqueName: \"kubernetes.io/projected/706713ee-0ea2-4018-847c-ccf3a0fafb1c-kube-api-access-8nzm9\") pod \"service-ca-9c57cc56f-kfqcf\" (UID: \"706713ee-0ea2-4018-847c-ccf3a0fafb1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-kfqcf" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.736992 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-lz6mw" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.745651 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwsfq\" (UniqueName: \"kubernetes.io/projected/714cef39-2960-4a25-ac81-a4e65a115eb3-kube-api-access-xwsfq\") pod \"csi-hostpathplugin-4nghl\" (UID: \"714cef39-2960-4a25-ac81-a4e65a115eb3\") " pod="hostpath-provisioner/csi-hostpathplugin-4nghl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.762231 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r6cxt" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.763905 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh8t8\" (UniqueName: \"kubernetes.io/projected/77b31235-8b07-4d66-aec8-64e5b7fae08e-kube-api-access-nh8t8\") pod \"service-ca-operator-777779d784-44gfk\" (UID: \"77b31235-8b07-4d66-aec8-64e5b7fae08e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-44gfk" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.768456 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6k2j\" (UniqueName: \"kubernetes.io/projected/8062d225-aa57-48df-bf28-2254ecc4f635-kube-api-access-z6k2j\") pod \"control-plane-machine-set-operator-78cbb6b69f-sq9mx\" (UID: \"8062d225-aa57-48df-bf28-2254ecc4f635\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sq9mx" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.782579 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-l2t56" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.785146 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf5f9\" (UniqueName: \"kubernetes.io/projected/f18250a8-66c1-445d-9452-081de13b24f7-kube-api-access-lf5f9\") pod \"multus-admission-controller-857f4d67dd-zkbcz\" (UID: \"f18250a8-66c1-445d-9452-081de13b24f7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zkbcz" Jan 29 16:12:16 crc kubenswrapper[4714]: W0129 16:12:16.789538 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfb0bd22_cbd8_4ce8_a4f6_86a16dcdeb92.slice/crio-157fd5666164f797770cad3af5809cb593dc5c7c6941d71c40b7576dc2323486 WatchSource:0}: Error finding container 157fd5666164f797770cad3af5809cb593dc5c7c6941d71c40b7576dc2323486: Status 404 returned error can't find the container with id 157fd5666164f797770cad3af5809cb593dc5c7c6941d71c40b7576dc2323486 Jan 29 16:12:16 crc kubenswrapper[4714]: W0129 16:12:16.791929 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6de35940_bef4_4dfa_9a83_08ba29d73399.slice/crio-bab710f268cee7a17d81afe5168b5be33ec92cbaa20fae67c86001bda19ec16b WatchSource:0}: Error finding container bab710f268cee7a17d81afe5168b5be33ec92cbaa20fae67c86001bda19ec16b: Status 404 returned error can't find the container with id bab710f268cee7a17d81afe5168b5be33ec92cbaa20fae67c86001bda19ec16b Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.792117 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xtzbx" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.810252 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:16 crc kubenswrapper[4714]: E0129 16:12:16.810854 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:17.310818275 +0000 UTC m=+143.831319415 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.812610 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/48be8ad8-4c02-4bea-a143-449763b39d54-registry-certificates\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.812790 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-chzp2" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.814562 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfkhq\" (UniqueName: \"kubernetes.io/projected/1fd5b799-74c2-4ffa-b3d9-6745c66ba28f-kube-api-access-vfkhq\") pod \"machine-config-server-zxb4v\" (UID: \"1fd5b799-74c2-4ffa-b3d9-6745c66ba28f\") " pod="openshift-machine-config-operator/machine-config-server-zxb4v" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.821776 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb979c55-3027-4d92-94b9-cd17c32e6331-secret-volume\") pod \"collect-profiles-29495040-5mkf8\" (UID: \"cb979c55-3027-4d92-94b9-cd17c32e6331\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5mkf8" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.822043 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sq9mx" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.826804 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g6v4\" (UniqueName: \"kubernetes.io/projected/adc3900b-dce0-4da4-bfc2-bca85b2395b2-kube-api-access-4g6v4\") pod \"package-server-manager-789f6589d5-pbrmk\" (UID: \"adc3900b-dce0-4da4-bfc2-bca85b2395b2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbrmk" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.829051 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-44gfk" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.837322 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zkbcz" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.852269 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p4cl\" (UniqueName: \"kubernetes.io/projected/2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab-kube-api-access-4p4cl\") pod \"packageserver-d55dfcdfc-v68nn\" (UID: \"2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v68nn" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.852979 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5mkf8" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.865523 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbrmk" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.880802 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ljnh7" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.894167 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsbnv\" (UniqueName: \"kubernetes.io/projected/02158b16-2eb1-4b8e-b1bb-55285b22d053-kube-api-access-qsbnv\") pod \"ingress-canary-vhtdt\" (UID: \"02158b16-2eb1-4b8e-b1bb-55285b22d053\") " pod="openshift-ingress-canary/ingress-canary-vhtdt" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.895737 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kfqcf" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.903414 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vhtdt" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.913559 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:16 crc kubenswrapper[4714]: E0129 16:12:16.914020 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:17.414003379 +0000 UTC m=+143.934504499 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.921539 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45vkr\" (UniqueName: \"kubernetes.io/projected/fcaee576-dff0-4a67-a0b1-7347b3030729-kube-api-access-45vkr\") pod \"dns-default-jcdhl\" (UID: \"fcaee576-dff0-4a67-a0b1-7347b3030729\") " pod="openshift-dns/dns-default-jcdhl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.921881 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4nghl" Jan 29 16:12:16 crc kubenswrapper[4714]: I0129 16:12:16.929284 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-zxb4v" Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.008007 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" event={"ID":"3c2d0611-58f8-4a7e-8280-361c80d62802","Type":"ContainerStarted","Data":"997bb46f3e8548114daabdb0676e47c164f03b6651e1e3ef03b31f66106dbebd"} Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.014864 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:17 crc kubenswrapper[4714]: E0129 16:12:17.015339 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:17.515320298 +0000 UTC m=+144.035821418 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.016023 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" event={"ID":"c779f8ba-7614-49f1-be6d-a9e316ec59ba","Type":"ContainerStarted","Data":"3152c7a0341e5d50a15d3a5dff1e3fe3b0fb3f928bd969fbb893bfd4b05c9599"} Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.016899 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-z4h55" event={"ID":"bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92","Type":"ContainerStarted","Data":"157fd5666164f797770cad3af5809cb593dc5c7c6941d71c40b7576dc2323486"} Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.017826 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" event={"ID":"832097a5-4691-42b6-99cc-38679071d5ee","Type":"ContainerStarted","Data":"3a5ee9422c0e8f2bda4f13b1ec7a93ce78a161df42fc1dddfe6f8337aed30775"} Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.025444 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jb6jw" event={"ID":"b25d77ec-57de-4c2a-b534-e98bf149b92a","Type":"ContainerStarted","Data":"f4d02887cf93fa7440522b162f0c7f7416034125a8251d52f88c2abe6f4c8b5d"} Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.031701 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-kvp9d" event={"ID":"ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec","Type":"ContainerStarted","Data":"fac742e2689e1006f8d65c8e994ad3939bd198f2f9224e4d331d2df4d5693fc3"} Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.047610 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dwsm5" 
event={"ID":"8f71ba3e-c687-4ff7-9475-1e18ded764f6","Type":"ContainerStarted","Data":"91c8d3bea295ee12e93d224413f942892c3f7aeff1c8def01e65177720debd20"} Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.049226 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mrprd"] Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.050222 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcj84" event={"ID":"eacb9f84-018a-4f64-b211-c9bedce50b9e","Type":"ContainerStarted","Data":"ace923f68ad29b1636a1fe40bfc3ee570490853bcb4fd62025c25c9b78f49d58"} Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.052671 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6jl75" event={"ID":"99bab267-639b-48b1-abc4-8c0373200a39","Type":"ContainerStarted","Data":"0e25e4179fbb1ee7f25b933d8c5c0910ffc2e04351daf128945e263f520c2e59"} Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.055139 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cp5md" event={"ID":"0387af3d-8796-46b0-9282-9ecbda7fe3a7","Type":"ContainerStarted","Data":"aeff327d5990523e2cb090f23196b9fa5cb535b0bfdecedfb353339232ef1474"} Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.056683 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fn75b" event={"ID":"42b66dc3-a385-4350-a943-50f062da35f7","Type":"ContainerStarted","Data":"33cc048aa8b0ae0f2921ff20c63d84f7536484b13e9c5a3fee51211e6060d16e"} Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.059676 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nh2m9" event={"ID":"97cfbecd-36ef-409b-94e9-f607a1fa2c42","Type":"ContainerStarted","Data":"c4aa9f52cc02c48d04509bb33a37850911598242997e7951d059e039ac5a5e8a"} Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.068203 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw" event={"ID":"fbfdd647-1d64-4d35-9af2-6dee52b4c860","Type":"ContainerStarted","Data":"79940598fef6f2445dc05d94ab28a7d984953a342201b3331c2b27e4796135a0"} Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.071346 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-m2g9h" event={"ID":"0e2a789d-6a90-4d60-881e-9562cd92e0a7","Type":"ContainerStarted","Data":"a924f3005f8d63da614824356d28979437300d10da79ec4af9d993bb04ed4e85"} Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.079558 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4dn69" event={"ID":"6de35940-bef4-4dfa-9a83-08ba29d73399","Type":"ContainerStarted","Data":"bab710f268cee7a17d81afe5168b5be33ec92cbaa20fae67c86001bda19ec16b"} Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.079966 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-sv7xw"] Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.116615 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:17 crc kubenswrapper[4714]: E0129 16:12:17.116984 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:17.616970146 +0000 UTC m=+144.137471266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.146427 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v68nn" Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.157328 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nf7jb"] Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.161278 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ch6wr"] Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.183853 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-jcdhl" Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.195873 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9thpj"] Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.218440 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:17 crc kubenswrapper[4714]: E0129 16:12:17.218664 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:17.718629074 +0000 UTC m=+144.239130194 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.218862 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:17 crc kubenswrapper[4714]: E0129 16:12:17.219953 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:17.719934024 +0000 UTC m=+144.240435144 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.229514 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2lpzx"] Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.319736 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:17 crc kubenswrapper[4714]: E0129 16:12:17.319879 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:17.819851399 +0000 UTC m=+144.340352519 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.320018 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:17 crc kubenswrapper[4714]: E0129 16:12:17.320426 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:17.820405626 +0000 UTC m=+144.340906766 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.421571 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:17 crc kubenswrapper[4714]: E0129 16:12:17.421839 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:17.921770416 +0000 UTC m=+144.442271576 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.422318 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:17 crc kubenswrapper[4714]: E0129 16:12:17.422626 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:17.922615012 +0000 UTC m=+144.443116132 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.523716 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:17 crc kubenswrapper[4714]: E0129 16:12:17.524179 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:18.024127706 +0000 UTC m=+144.544628826 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.627924 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:17 crc kubenswrapper[4714]: E0129 16:12:17.628347 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:18.128333132 +0000 UTC m=+144.648834252 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.729531 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:17 crc kubenswrapper[4714]: E0129 16:12:17.732247 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:18.232206478 +0000 UTC m=+144.752707598 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.833521 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:17 crc kubenswrapper[4714]: E0129 16:12:17.833850 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:18.333834116 +0000 UTC m=+144.854335236 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.934999 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:17 crc kubenswrapper[4714]: E0129 16:12:17.935105 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:18.435090642 +0000 UTC m=+144.955591752 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.935242 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:17 crc kubenswrapper[4714]: E0129 16:12:17.935590 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:18.435582737 +0000 UTC m=+144.956083857 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:17 crc kubenswrapper[4714]: I0129 16:12:17.981624 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-r6cxt"] Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.035953 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:18 crc kubenswrapper[4714]: E0129 16:12:18.036119 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:18.53610192 +0000 UTC m=+145.056603040 (durationBeforeRetry 500ms). 
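From here on, the same TearDown/MountDevice pair fails on a ~100ms reconcile / 500ms backoff cadence and will keep doing so until csi-hostpathplugin-4nghl (whose sandbox creation was logged above) comes up and registers the driver. One way to watch for that transition from outside, sketched with client-go against an assumed kubeconfig and the node name "crc":

```go
// Poll the node's CSINode object until the driver named in the errors above
// appears; that registration is what stops the retry loop. Assumed workflow.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	const node, driver = "crc", "kubevirt.io.hostpath-provisioner"
	for {
		cn, err := cs.StorageV1().CSINodes().Get(context.TODO(), node, metav1.GetOptions{})
		if err == nil {
			for _, d := range cn.Spec.Drivers {
				if d.Name == driver {
					fmt.Println("driver registered; kubelet mount/unmount retries can now succeed")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // match the kubelet retry cadence
	}
}
```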
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.036239 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:18 crc kubenswrapper[4714]: E0129 16:12:18.036500 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:18.536491132 +0000 UTC m=+145.056992252 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.090063 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mrprd" event={"ID":"554abf87-b1ba-45b1-8130-95b40da3b8bf","Type":"ContainerStarted","Data":"20c7457cb1680b638afb8dc71ad2c9731bfe83fe5accc7acd040c18b0e6bb417"} Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.093179 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ch6wr" event={"ID":"fc8e2d06-1cc2-4ea7-8d87-340d28740e20","Type":"ContainerStarted","Data":"80e18daa72619ece41b19b8e484cfe2d06f7ee1c093574382461cc44012bf2d7"} Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.094511 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-lz6mw" event={"ID":"7a1dfb55-8680-4cbe-bd78-caca2e847caf","Type":"ContainerStarted","Data":"40b83831be51662691c2389228703b2e44d2af6c53cf436bd27964284ccf620f"} Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.095583 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xvrxj" event={"ID":"f3e2f962-69e3-4008-a45f-5c35677f7f36","Type":"ContainerStarted","Data":"a78dd6d9c288aa19f7b8882e1352e2d9efbb00bd10a56122908cef032cacc644"} Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.118133 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nf7jb" event={"ID":"2eab9b06-06e4-4f58-ab86-ab1bf3b5cc96","Type":"ContainerStarted","Data":"f0bde486e4e1af6700632172e82ece5f5d8415f67c21552753efbbf51e1770c6"} Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.128281 
4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" event={"ID":"3c2d0611-58f8-4a7e-8280-361c80d62802","Type":"ContainerStarted","Data":"1a48e7995b6652b74e320bf25807bcc51cd5f49496615ac9028d0bf40f37019e"} Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.129245 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-zxb4v" event={"ID":"1fd5b799-74c2-4ffa-b3d9-6745c66ba28f","Type":"ContainerStarted","Data":"fc27b02212f07458ccb7ea63701f0795f0a6e29b461d101747658b13ad7ff742"} Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.130161 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9thpj" event={"ID":"d288ee23-1753-48f2-ab82-736defe5fe18","Type":"ContainerStarted","Data":"a6a9144e8ecffc012cb3413e6b06c6e60f1f002771aabaefd491cbe1ccb40491"} Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.130704 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-sv7xw" event={"ID":"b7cf219f-4e80-47fc-b349-ea5c7eab6d9d","Type":"ContainerStarted","Data":"8a0e03bbf7225c254f88d01b6109194d7f1c054597dc2a039f2c879dff5e1628"} Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.131532 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-kvp9d" event={"ID":"ad3c7510-ccc3-453a-91ae-b1f2cf88d2ec","Type":"ContainerStarted","Data":"68e3ecada4f8793d5f95b46b812675b515c4359fd9ffb700b0d202e241aedb17"} Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.134596 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99knh" event={"ID":"5b5da98c-0704-41c7-8563-707f7af93f41","Type":"ContainerStarted","Data":"ca070377a72c571495332a6a2fd95697bd5a37831362a5be412d22918536e812"} Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.136130 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" event={"ID":"c779f8ba-7614-49f1-be6d-a9e316ec59ba","Type":"ContainerStarted","Data":"e783a0f04dc36f9b00a69d43ae0355c9b664ad354b4cdc74b2a094e5b287a951"} Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.136690 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:18 crc kubenswrapper[4714]: E0129 16:12:18.136909 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:18.636888102 +0000 UTC m=+145.157389222 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.137068 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:18 crc kubenswrapper[4714]: E0129 16:12:18.137594 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:18.637579543 +0000 UTC m=+145.158080663 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.137810 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-m2g9h" event={"ID":"0e2a789d-6a90-4d60-881e-9562cd92e0a7","Type":"ContainerStarted","Data":"3f377532bfc03c12b7dca263550589c81deb919323d24c655c2ca133fb41dca9"} Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.138467 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2lpzx" event={"ID":"c14fb55e-a42b-46c9-9521-6e8b60235166","Type":"ContainerStarted","Data":"a8ceb66d30404f617cd622aafe231f15ba56287a7cd277803365b749533792a7"} Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.140206 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jb6jw" event={"ID":"b25d77ec-57de-4c2a-b534-e98bf149b92a","Type":"ContainerStarted","Data":"867f9250072589a8bc3ab77287b756dc175f3a75811f69552fc6d9db0e0a499a"} Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.238794 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:18 crc kubenswrapper[4714]: E0129 16:12:18.239050 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:18.739027625 +0000 UTC m=+145.259528755 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.239146 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:18 crc kubenswrapper[4714]: E0129 16:12:18.239560 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:18.739549191 +0000 UTC m=+145.260050311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.340090 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:18 crc kubenswrapper[4714]: E0129 16:12:18.340832 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:18.840816018 +0000 UTC m=+145.361317138 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.441694 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:18 crc kubenswrapper[4714]: E0129 16:12:18.442095 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:18.942083144 +0000 UTC m=+145.462584264 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.526253 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzg2m"] Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.543430 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:18 crc kubenswrapper[4714]: E0129 16:12:18.543762 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:19.043746673 +0000 UTC m=+145.564247793 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.557784 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l2t56"] Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.562332 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xtzbx"] Jan 29 16:12:18 crc kubenswrapper[4714]: W0129 16:12:18.580348 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc84f60e_094e_4924_b6f1_f0a8ab81aa4e.slice/crio-11084ed18f3b11987189819d9bb3dd492afdb82473bb2a69f517fe602c5a24f7 WatchSource:0}: Error finding container 11084ed18f3b11987189819d9bb3dd492afdb82473bb2a69f517fe602c5a24f7: Status 404 returned error can't find the container with id 11084ed18f3b11987189819d9bb3dd492afdb82473bb2a69f517fe602c5a24f7 Jan 29 16:12:18 crc kubenswrapper[4714]: W0129 16:12:18.619494 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80515d06_c09e_4c9d_a90f_43cc84edf4c9.slice/crio-5880f1855bae3fd6f603655d40b770623f038db9a3cb9db3918877f801567acc WatchSource:0}: Error finding container 5880f1855bae3fd6f603655d40b770623f038db9a3cb9db3918877f801567acc: Status 404 returned error can't find the container with id 5880f1855bae3fd6f603655d40b770623f038db9a3cb9db3918877f801567acc Jan 29 16:12:18 crc kubenswrapper[4714]: W0129 16:12:18.627695 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fa1ede8_3ea3_421d_929d_f6bf9cc1db0e.slice/crio-f4ea3f04681522c185b8823272251a5951ed23cf1e074da3e02d76c59ea7250d WatchSource:0}: Error finding container f4ea3f04681522c185b8823272251a5951ed23cf1e074da3e02d76c59ea7250d: Status 404 returned error can't find the container with id f4ea3f04681522c185b8823272251a5951ed23cf1e074da3e02d76c59ea7250d Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.645280 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:18 crc kubenswrapper[4714]: E0129 16:12:18.647113 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:19.147097993 +0000 UTC m=+145.667599113 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.663611 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-44gfk"] Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.668056 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-chzp2"] Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.672442 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ljnh7"] Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.691625 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sq9mx"] Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.695793 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vhtdt"] Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.702154 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4nghl"] Jan 29 16:12:18 crc kubenswrapper[4714]: W0129 16:12:18.707000 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77b31235_8b07_4d66_aec8_64e5b7fae08e.slice/crio-b57b92cf1409311fba74582f0e99821728e24bd5d89e8468fe429886e3a64c84 WatchSource:0}: Error finding container b57b92cf1409311fba74582f0e99821728e24bd5d89e8468fe429886e3a64c84: Status 404 returned error can't find the container with id b57b92cf1409311fba74582f0e99821728e24bd5d89e8468fe429886e3a64c84 Jan 29 16:12:18 crc kubenswrapper[4714]: W0129 16:12:18.707779 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b632d84_c711_419a_9e24_bdb4c6e9aef6.slice/crio-2661bab2f985c589314989ca54a7b8417fe38b5661dd2093e413b43b2402ce34 WatchSource:0}: Error finding container 2661bab2f985c589314989ca54a7b8417fe38b5661dd2093e413b43b2402ce34: Status 404 returned error can't find the container with id 2661bab2f985c589314989ca54a7b8417fe38b5661dd2093e413b43b2402ce34 Jan 29 16:12:18 crc kubenswrapper[4714]: W0129 16:12:18.709289 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38f843fe_c20b_4bc7_8f45_4ccd4b7be5a5.slice/crio-92cdec071830256f624ca0c36fdd787e417207e233c1063a8ab14f07ebc95d1f WatchSource:0}: Error finding container 92cdec071830256f624ca0c36fdd787e417207e233c1063a8ab14f07ebc95d1f: Status 404 returned error can't find the container with id 92cdec071830256f624ca0c36fdd787e417207e233c1063a8ab14f07ebc95d1f Jan 29 16:12:18 crc kubenswrapper[4714]: W0129 16:12:18.714110 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8062d225_aa57_48df_bf28_2254ecc4f635.slice/crio-951697dbf7c719b339005199178b35e78824e9709406b5beed446e6875f95f52 WatchSource:0}: Error finding container 
951697dbf7c719b339005199178b35e78824e9709406b5beed446e6875f95f52: Status 404 returned error can't find the container with id 951697dbf7c719b339005199178b35e78824e9709406b5beed446e6875f95f52 Jan 29 16:12:18 crc kubenswrapper[4714]: W0129 16:12:18.716391 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02158b16_2eb1_4b8e_b1bb_55285b22d053.slice/crio-33677f709ea45fe339c09a2748d7e700229caf6e46e524c0ee7292bfc9a240d9 WatchSource:0}: Error finding container 33677f709ea45fe339c09a2748d7e700229caf6e46e524c0ee7292bfc9a240d9: Status 404 returned error can't find the container with id 33677f709ea45fe339c09a2748d7e700229caf6e46e524c0ee7292bfc9a240d9 Jan 29 16:12:18 crc kubenswrapper[4714]: W0129 16:12:18.719648 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod714cef39_2960_4a25_ac81_a4e65a115eb3.slice/crio-ea583be98e43e32e2b9eaab9f7b14cfcc6c03d18a9e1e7bc8ee858d8764bccfb WatchSource:0}: Error finding container ea583be98e43e32e2b9eaab9f7b14cfcc6c03d18a9e1e7bc8ee858d8764bccfb: Status 404 returned error can't find the container with id ea583be98e43e32e2b9eaab9f7b14cfcc6c03d18a9e1e7bc8ee858d8764bccfb Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.748822 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:18 crc kubenswrapper[4714]: E0129 16:12:18.748983 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:19.248965068 +0000 UTC m=+145.769466188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.749059 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:18 crc kubenswrapper[4714]: E0129 16:12:18.749321 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:19.249313838 +0000 UTC m=+145.769814958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.805157 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kfqcf"] Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.807522 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495040-5mkf8"] Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.814220 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbrmk"] Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.834202 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zkbcz"] Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.850175 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:18 crc kubenswrapper[4714]: E0129 16:12:18.850370 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:19.350344057 +0000 UTC m=+145.870845177 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.850622 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:18 crc kubenswrapper[4714]: E0129 16:12:18.850963 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:19.350929035 +0000 UTC m=+145.871430155 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.854582 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v68nn"] Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.858753 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jcdhl"] Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.952032 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:18 crc kubenswrapper[4714]: E0129 16:12:18.952235 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:19.452206672 +0000 UTC m=+145.972707802 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:18 crc kubenswrapper[4714]: I0129 16:12:18.952517 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:18 crc kubenswrapper[4714]: E0129 16:12:18.952892 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:19.452879963 +0000 UTC m=+145.973381083 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:19 crc kubenswrapper[4714]: W0129 16:12:19.054213 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb979c55_3027_4d92_94b9_cd17c32e6331.slice/crio-64dd0aa019d2ff78b021ae7336dcb6b797f2fc222a74d3ec8b1b6722bf60513e WatchSource:0}: Error finding container 64dd0aa019d2ff78b021ae7336dcb6b797f2fc222a74d3ec8b1b6722bf60513e: Status 404 returned error can't find the container with id 64dd0aa019d2ff78b021ae7336dcb6b797f2fc222a74d3ec8b1b6722bf60513e Jan 29 16:12:19 crc kubenswrapper[4714]: W0129 16:12:19.055455 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod706713ee_0ea2_4018_847c_ccf3a0fafb1c.slice/crio-ac69bf925281e62c711bc61e1e7af18c65e142666843c7ebd8119153dbb6c812 WatchSource:0}: Error finding container ac69bf925281e62c711bc61e1e7af18c65e142666843c7ebd8119153dbb6c812: Status 404 returned error can't find the container with id ac69bf925281e62c711bc61e1e7af18c65e142666843c7ebd8119153dbb6c812 Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.055459 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:19 crc kubenswrapper[4714]: E0129 16:12:19.055623 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:19.555591473 +0000 UTC m=+146.076092633 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.056382 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:19 crc kubenswrapper[4714]: E0129 16:12:19.059593 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 16:12:19.559568845 +0000 UTC m=+146.080069995 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:19 crc kubenswrapper[4714]: W0129 16:12:19.063166 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fbdedcd_e0ce_4fe9_b19a_53d422b1fcab.slice/crio-bd5ddda0b956be079adf5a95904bc40a5bfab574b272bcc0aace47eca5063bd1 WatchSource:0}: Error finding container bd5ddda0b956be079adf5a95904bc40a5bfab574b272bcc0aace47eca5063bd1: Status 404 returned error can't find the container with id bd5ddda0b956be079adf5a95904bc40a5bfab574b272bcc0aace47eca5063bd1 Jan 29 16:12:19 crc kubenswrapper[4714]: W0129 16:12:19.076114 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadc3900b_dce0_4da4_bfc2_bca85b2395b2.slice/crio-f8050531b7ff6cac579652a7da90a372a37a8c79cc21280f1c40da4fd018f9e9 WatchSource:0}: Error finding container f8050531b7ff6cac579652a7da90a372a37a8c79cc21280f1c40da4fd018f9e9: Status 404 returned error can't find the container with id f8050531b7ff6cac579652a7da90a372a37a8c79cc21280f1c40da4fd018f9e9 Jan 29 16:12:19 crc kubenswrapper[4714]: W0129 16:12:19.079100 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcaee576_dff0_4a67_a0b1_7347b3030729.slice/crio-03a0c2abcdf4f7b3d6c47dcfce1a4e19bf09e285b4ddaa4cb1ab60d493e59e52 WatchSource:0}: Error finding container 03a0c2abcdf4f7b3d6c47dcfce1a4e19bf09e285b4ddaa4cb1ab60d493e59e52: Status 404 returned error can't find the container with id 03a0c2abcdf4f7b3d6c47dcfce1a4e19bf09e285b4ddaa4cb1ab60d493e59e52 Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.157679 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:19 crc kubenswrapper[4714]: E0129 16:12:19.157906 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:19.65787336 +0000 UTC m=+146.178374520 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.158058 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:19 crc kubenswrapper[4714]: E0129 16:12:19.158375 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:19.658364895 +0000 UTC m=+146.178866015 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.164291 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbrmk" event={"ID":"adc3900b-dce0-4da4-bfc2-bca85b2395b2","Type":"ContainerStarted","Data":"f8050531b7ff6cac579652a7da90a372a37a8c79cc21280f1c40da4fd018f9e9"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.168010 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw" event={"ID":"fbfdd647-1d64-4d35-9af2-6dee52b4c860","Type":"ContainerStarted","Data":"3e427856d41457853f68d5a32dc2f87168c0f52da592bc0a2b3db34f710fea2b"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.170036 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4nghl" event={"ID":"714cef39-2960-4a25-ac81-a4e65a115eb3","Type":"ContainerStarted","Data":"ea583be98e43e32e2b9eaab9f7b14cfcc6c03d18a9e1e7bc8ee858d8764bccfb"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.172116 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mrprd" event={"ID":"554abf87-b1ba-45b1-8130-95b40da3b8bf","Type":"ContainerStarted","Data":"e67c2cb59688caec6df23f5f60f848dd46db1badae2c1e268c4e6220c2473c3d"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.174456 4714 generic.go:334] "Generic (PLEG): container finished" podID="c779f8ba-7614-49f1-be6d-a9e316ec59ba" containerID="e783a0f04dc36f9b00a69d43ae0355c9b664ad354b4cdc74b2a094e5b287a951" exitCode=0 Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.174703 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" 
event={"ID":"c779f8ba-7614-49f1-be6d-a9e316ec59ba","Type":"ContainerDied","Data":"e783a0f04dc36f9b00a69d43ae0355c9b664ad354b4cdc74b2a094e5b287a951"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.177009 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ch6wr" event={"ID":"fc8e2d06-1cc2-4ea7-8d87-340d28740e20","Type":"ContainerStarted","Data":"06482b6470fadce059730244f351ad7eba0145dba4d03ba4db08d52aaec2d67b"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.181358 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zkbcz" event={"ID":"f18250a8-66c1-445d-9452-081de13b24f7","Type":"ContainerStarted","Data":"da41d782933837b7976ce98d7f7dceca41fef963f862c55dd1a2cbca9a125e7e"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.182649 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vhtdt" event={"ID":"02158b16-2eb1-4b8e-b1bb-55285b22d053","Type":"ContainerStarted","Data":"33677f709ea45fe339c09a2748d7e700229caf6e46e524c0ee7292bfc9a240d9"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.184583 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ljnh7" event={"ID":"38f843fe-c20b-4bc7-8f45-4ccd4b7be5a5","Type":"ContainerStarted","Data":"92cdec071830256f624ca0c36fdd787e417207e233c1063a8ab14f07ebc95d1f"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.186637 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcj84" event={"ID":"eacb9f84-018a-4f64-b211-c9bedce50b9e","Type":"ContainerStarted","Data":"75c58bd7a88a96370485f93d6e2d9e0f9290e9433152047ce26bb9ea94c2b98d"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.188890 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-44gfk" event={"ID":"77b31235-8b07-4d66-aec8-64e5b7fae08e","Type":"ContainerStarted","Data":"b57b92cf1409311fba74582f0e99821728e24bd5d89e8468fe429886e3a64c84"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.204102 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dwsm5" event={"ID":"8f71ba3e-c687-4ff7-9475-1e18ded764f6","Type":"ContainerStarted","Data":"fa826543db32457e92c9cbe60c432fd9da541a48d0e6863d88c03b0803b71041"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.205562 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzg2m" event={"ID":"cc84f60e-094e-4924-b6f1-f0a8ab81aa4e","Type":"ContainerStarted","Data":"11084ed18f3b11987189819d9bb3dd492afdb82473bb2a69f517fe602c5a24f7"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.215758 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcj84" podStartSLOduration=125.215738859 podStartE2EDuration="2m5.215738859s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:19.213224062 +0000 UTC m=+145.733725192" watchObservedRunningTime="2026-01-29 16:12:19.215738859 +0000 UTC m=+145.736239979" Jan 29 16:12:19 crc 
kubenswrapper[4714]: I0129 16:12:19.221249 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-chzp2" event={"ID":"1b632d84-c711-419a-9e24-bdb4c6e9aef6","Type":"ContainerStarted","Data":"2661bab2f985c589314989ca54a7b8417fe38b5661dd2093e413b43b2402ce34"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.224444 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r6cxt" event={"ID":"1d9869e2-6f55-4246-8ed0-b8af9dab3f74","Type":"ContainerStarted","Data":"55994348ab3dfc8cafc62c6e955628ce51995a40f12003eed7813e37443926ec"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.226628 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xtzbx" event={"ID":"2fa1ede8-3ea3-421d-929d-f6bf9cc1db0e","Type":"ContainerStarted","Data":"f4ea3f04681522c185b8823272251a5951ed23cf1e074da3e02d76c59ea7250d"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.228680 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sq9mx" event={"ID":"8062d225-aa57-48df-bf28-2254ecc4f635","Type":"ContainerStarted","Data":"951697dbf7c719b339005199178b35e78824e9709406b5beed446e6875f95f52"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.232630 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cp5md" event={"ID":"0387af3d-8796-46b0-9282-9ecbda7fe3a7","Type":"ContainerStarted","Data":"39550e40b8e5b05c9cdd0a131397b8c7809c049c26f2073f6e73cf2dc88c7bb7"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.236262 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-z4h55" event={"ID":"bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92","Type":"ContainerStarted","Data":"92d27e916647ca2c73b1e771eece4264778a1470633157345c9df0fb0f9c40df"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.238666 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nf7jb" event={"ID":"2eab9b06-06e4-4f58-ab86-ab1bf3b5cc96","Type":"ContainerStarted","Data":"d0ace5c4217a196b554ce0fd58847e9caf744ec08f66dd02e26fdb147fb683d2"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.248753 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-l2t56" event={"ID":"80515d06-c09e-4c9d-a90f-43cc84edf4c9","Type":"ContainerStarted","Data":"5880f1855bae3fd6f603655d40b770623f038db9a3cb9db3918877f801567acc"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.257914 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9thpj" event={"ID":"d288ee23-1753-48f2-ab82-736defe5fe18","Type":"ContainerStarted","Data":"3bfaf3a8d15fe9b3351c3d20e4177a0beef558e3311170e7b013daf0db5da97c"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.259585 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:19 crc kubenswrapper[4714]: E0129 16:12:19.261452 
4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:19.761421796 +0000 UTC m=+146.281922916 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.281885 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4dn69" event={"ID":"6de35940-bef4-4dfa-9a83-08ba29d73399","Type":"ContainerStarted","Data":"f9e1074728408def2a387a3b251afa2257fdc6dcfa08bf7a9f5de414199e7c75"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.284332 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v68nn" event={"ID":"2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab","Type":"ContainerStarted","Data":"bd5ddda0b956be079adf5a95904bc40a5bfab574b272bcc0aace47eca5063bd1"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.286348 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nh2m9" event={"ID":"97cfbecd-36ef-409b-94e9-f607a1fa2c42","Type":"ContainerStarted","Data":"39b17c10dcdb044358f9409b55dce81fdd9556f01f697e8d12121445449b6325"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.287477 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5mkf8" event={"ID":"cb979c55-3027-4d92-94b9-cd17c32e6331","Type":"ContainerStarted","Data":"64dd0aa019d2ff78b021ae7336dcb6b797f2fc222a74d3ec8b1b6722bf60513e"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.288631 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kfqcf" event={"ID":"706713ee-0ea2-4018-847c-ccf3a0fafb1c","Type":"ContainerStarted","Data":"ac69bf925281e62c711bc61e1e7af18c65e142666843c7ebd8119153dbb6c812"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.289393 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jcdhl" event={"ID":"fcaee576-dff0-4a67-a0b1-7347b3030729","Type":"ContainerStarted","Data":"03a0c2abcdf4f7b3d6c47dcfce1a4e19bf09e285b4ddaa4cb1ab60d493e59e52"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.292279 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-zxb4v" event={"ID":"1fd5b799-74c2-4ffa-b3d9-6745c66ba28f","Type":"ContainerStarted","Data":"35d7262eb0de9b8a75316a252f9d4e76314213d335107d88f35442d4a8a41b55"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.294613 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fn75b" event={"ID":"42b66dc3-a385-4350-a943-50f062da35f7","Type":"ContainerStarted","Data":"c4aaec06be7df88764d0dc745049e2e561eb871b6ccb463e86a9ef317a262a34"} Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.295453 4714 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.298009 4714 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-xlczd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.298045 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" podUID="3c2d0611-58f8-4a7e-8280-361c80d62802" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.314911 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-jb6jw" podStartSLOduration=125.314895681 podStartE2EDuration="2m5.314895681s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:19.311018483 +0000 UTC m=+145.831519603" watchObservedRunningTime="2026-01-29 16:12:19.314895681 +0000 UTC m=+145.835396801" Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.328938 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" podStartSLOduration=125.32891746 podStartE2EDuration="2m5.32891746s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:19.328230879 +0000 UTC m=+145.848731989" watchObservedRunningTime="2026-01-29 16:12:19.32891746 +0000 UTC m=+145.849418600" Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.345801 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-kvp9d" podStartSLOduration=125.345779846 podStartE2EDuration="2m5.345779846s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:19.343777014 +0000 UTC m=+145.864278134" watchObservedRunningTime="2026-01-29 16:12:19.345779846 +0000 UTC m=+145.866280966" Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.361052 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:19 crc kubenswrapper[4714]: E0129 16:12:19.362760 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:19.862739874 +0000 UTC m=+146.383241114 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.363591 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-m2g9h" podStartSLOduration=125.36357874 podStartE2EDuration="2m5.36357874s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:19.358008039 +0000 UTC m=+145.878509169" watchObservedRunningTime="2026-01-29 16:12:19.36357874 +0000 UTC m=+145.884079860" Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.461803 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:19 crc kubenswrapper[4714]: E0129 16:12:19.461969 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:19.961946838 +0000 UTC m=+146.482447958 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.462440 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:19 crc kubenswrapper[4714]: E0129 16:12:19.462747 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:19.962733482 +0000 UTC m=+146.483234602 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.563338 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:19 crc kubenswrapper[4714]: E0129 16:12:19.563705 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:20.063690729 +0000 UTC m=+146.584191849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.664997 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:19 crc kubenswrapper[4714]: E0129 16:12:19.665485 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:20.165468981 +0000 UTC m=+146.685970101 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.766002 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:19 crc kubenswrapper[4714]: E0129 16:12:19.766623 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:20.266592223 +0000 UTC m=+146.787093393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.867974 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:19 crc kubenswrapper[4714]: E0129 16:12:19.868333 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:20.368318483 +0000 UTC m=+146.888819593 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.969513 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:19 crc kubenswrapper[4714]: E0129 16:12:19.969731 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:20.469682573 +0000 UTC m=+146.990183733 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:19 crc kubenswrapper[4714]: I0129 16:12:19.969865 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:19 crc kubenswrapper[4714]: E0129 16:12:19.970312 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:20.470299222 +0000 UTC m=+146.990800342 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.071491 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:20 crc kubenswrapper[4714]: E0129 16:12:20.071803 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:20.571754694 +0000 UTC m=+147.092255874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.072081 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:20 crc kubenswrapper[4714]: E0129 16:12:20.072653 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:20.57262651 +0000 UTC m=+147.093127670 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.173500 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:20 crc kubenswrapper[4714]: E0129 16:12:20.173698 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:20.67366758 +0000 UTC m=+147.194168700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.173894 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:20 crc kubenswrapper[4714]: E0129 16:12:20.174420 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:20.674399352 +0000 UTC m=+147.194900492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.274791 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:20 crc kubenswrapper[4714]: E0129 16:12:20.275002 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:20.774973738 +0000 UTC m=+147.295474858 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.275391 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:20 crc kubenswrapper[4714]: E0129 16:12:20.275737 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:20.775729431 +0000 UTC m=+147.296230551 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.300094 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-sv7xw" event={"ID":"b7cf219f-4e80-47fc-b349-ea5c7eab6d9d","Type":"ContainerStarted","Data":"8a2fbce67ca39d9a948f00ef51d06e70b14ca049256c8ba6aa1a36330e72e95a"} Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.302055 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xvrxj" event={"ID":"f3e2f962-69e3-4008-a45f-5c35677f7f36","Type":"ContainerStarted","Data":"a45df571b922ec5a4ee61153c6edaae1fa7df4a72a624772ba8475e87c5aff9f"} Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.303976 4714 generic.go:334] "Generic (PLEG): container finished" podID="8f71ba3e-c687-4ff7-9475-1e18ded764f6" containerID="fa826543db32457e92c9cbe60c432fd9da541a48d0e6863d88c03b0803b71041" exitCode=0 Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.304077 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dwsm5" event={"ID":"8f71ba3e-c687-4ff7-9475-1e18ded764f6","Type":"ContainerDied","Data":"fa826543db32457e92c9cbe60c432fd9da541a48d0e6863d88c03b0803b71041"} Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.307094 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzg2m" event={"ID":"cc84f60e-094e-4924-b6f1-f0a8ab81aa4e","Type":"ContainerStarted","Data":"bd220881ff94f1948e3120bb7930942c7b879cd14685003c27e9c5391678f090"} Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.309055 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2lpzx" event={"ID":"c14fb55e-a42b-46c9-9521-6e8b60235166","Type":"ContainerStarted","Data":"a29d609a72c47494d9ab4a7af0ac3a2aa54fc3e5f33214e89b0bab625418fc8e"} Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.310787 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-lz6mw" event={"ID":"7a1dfb55-8680-4cbe-bd78-caca2e847caf","Type":"ContainerStarted","Data":"61719cba311aad8609ab840baea1cfa296842ec4a73281d02b334d955086776e"} Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.312486 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" event={"ID":"832097a5-4691-42b6-99cc-38679071d5ee","Type":"ContainerStarted","Data":"9bc1cd1aee2de10059a78dc94ebfc1cb6a64c8e3be39806bf6e6ab107f47abff"} Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.314695 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99knh" event={"ID":"5b5da98c-0704-41c7-8563-707f7af93f41","Type":"ContainerStarted","Data":"581fea08a13cda1ef3ef4dbc47396d1c7474fc8fd004dcb57a27254dd7af9194"} Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.316421 4714 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xtzbx" event={"ID":"2fa1ede8-3ea3-421d-929d-f6bf9cc1db0e","Type":"ContainerStarted","Data":"f0266f10504adfad4bf26dc8ced4f8847780aadf6b78ced8e81d31e0da10384f"} Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.318294 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-l2t56" event={"ID":"80515d06-c09e-4c9d-a90f-43cc84edf4c9","Type":"ContainerStarted","Data":"2cfb1164c8a5f24d11bd2a23214b6a7408be50990447c790085e11ea6faaec50"} Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.320104 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r6cxt" event={"ID":"1d9869e2-6f55-4246-8ed0-b8af9dab3f74","Type":"ContainerStarted","Data":"b12fdb98d7913f55c00f8037769750024c6d7d15e2f1695d2fe08e49e4dde6c0"} Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.323198 4714 generic.go:334] "Generic (PLEG): container finished" podID="99bab267-639b-48b1-abc4-8c0373200a39" containerID="d74d43e34a1e11efc05f19a35eac3bfba23a056d00217dd74cd8226d7e7f07e3" exitCode=0 Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.323339 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6jl75" event={"ID":"99bab267-639b-48b1-abc4-8c0373200a39","Type":"ContainerDied","Data":"d74d43e34a1e11efc05f19a35eac3bfba23a056d00217dd74cd8226d7e7f07e3"} Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.323785 4714 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-xlczd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.323826 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" podUID="3c2d0611-58f8-4a7e-8280-361c80d62802" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.324328 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-fn75b" Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.335597 4714 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn75b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.335674 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fn75b" podUID="42b66dc3-a385-4350-a943-50f062da35f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.358696 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw" podStartSLOduration=125.358679767 podStartE2EDuration="2m5.358679767s" podCreationTimestamp="2026-01-29 16:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:20.357046337 +0000 UTC m=+146.877547457" watchObservedRunningTime="2026-01-29 16:12:20.358679767 +0000 UTC m=+146.879180877" Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.376809 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:20 crc kubenswrapper[4714]: E0129 16:12:20.377014 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:20.876993327 +0000 UTC m=+147.397494447 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.377430 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:20 crc kubenswrapper[4714]: E0129 16:12:20.379688 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:20.879669929 +0000 UTC m=+147.400171059 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.388969 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-fn75b" podStartSLOduration=126.388950003 podStartE2EDuration="2m6.388950003s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:20.388388116 +0000 UTC m=+146.908889246" watchObservedRunningTime="2026-01-29 16:12:20.388950003 +0000 UTC m=+146.909451123" Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.412964 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cp5md" podStartSLOduration=126.412947176 podStartE2EDuration="2m6.412947176s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:20.412502363 +0000 UTC m=+146.933003503" watchObservedRunningTime="2026-01-29 16:12:20.412947176 +0000 UTC m=+146.933448296" Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.478146 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:20 crc kubenswrapper[4714]: E0129 16:12:20.478246 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:20.978230423 +0000 UTC m=+147.498731543 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:20 crc kubenswrapper[4714]: E0129 16:12:20.478776 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:20.978767999 +0000 UTC m=+147.499269119 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.478994 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.579724 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:20 crc kubenswrapper[4714]: E0129 16:12:20.580286 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:21.080266673 +0000 UTC m=+147.600767793 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.682095 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:20 crc kubenswrapper[4714]: E0129 16:12:20.682532 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:21.182517529 +0000 UTC m=+147.703018649 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.782868 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:20 crc kubenswrapper[4714]: E0129 16:12:20.783148 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:21.283120655 +0000 UTC m=+147.803621775 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.783237 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:20 crc kubenswrapper[4714]: E0129 16:12:20.783679 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:21.283667552 +0000 UTC m=+147.804168672 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.884010 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:20 crc kubenswrapper[4714]: E0129 16:12:20.884180 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:21.384157505 +0000 UTC m=+147.904658625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.884363 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:20 crc kubenswrapper[4714]: E0129 16:12:20.884698 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:21.384685621 +0000 UTC m=+147.905186741 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.985719 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:20 crc kubenswrapper[4714]: E0129 16:12:20.985915 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:21.485884095 +0000 UTC m=+148.006385245 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:20 crc kubenswrapper[4714]: I0129 16:12:20.986456 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:20 crc kubenswrapper[4714]: E0129 16:12:20.986727 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:21.48671455 +0000 UTC m=+148.007215670 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.087806 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:21 crc kubenswrapper[4714]: E0129 16:12:21.088089 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:21.588055519 +0000 UTC m=+148.108556679 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.088336 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:21 crc kubenswrapper[4714]: E0129 16:12:21.088716 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:21.588700159 +0000 UTC m=+148.109201279 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.189210 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:21 crc kubenswrapper[4714]: E0129 16:12:21.189467 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:21.689431839 +0000 UTC m=+148.209932989 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.291519 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:21 crc kubenswrapper[4714]: E0129 16:12:21.292163 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:21.792136189 +0000 UTC m=+148.312637319 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.332124 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-chzp2" event={"ID":"1b632d84-c711-419a-9e24-bdb4c6e9aef6","Type":"ContainerStarted","Data":"6bc694dad0122356b41ee216407b3a7bf24cc0a80090d885b685230aef3c95b4"} Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.333963 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sq9mx" event={"ID":"8062d225-aa57-48df-bf28-2254ecc4f635","Type":"ContainerStarted","Data":"dbb3fe5cab326b40963c30577a076031ca8d1773b098c85a11b511d73b7cab55"} Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.335864 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5mkf8" event={"ID":"cb979c55-3027-4d92-94b9-cd17c32e6331","Type":"ContainerStarted","Data":"8002880bca0bfedb17cd3285afc46c8ef8627aa8d830b5e5901fd7ddfd3b1e33"} Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.337683 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-44gfk" event={"ID":"77b31235-8b07-4d66-aec8-64e5b7fae08e","Type":"ContainerStarted","Data":"76badda10c81fd0945af3652cd492c0ee46a055ba17a3754657a75e06e651f6a"} Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.339073 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbrmk" event={"ID":"adc3900b-dce0-4da4-bfc2-bca85b2395b2","Type":"ContainerStarted","Data":"c0dcbdcfc2435eacfaf4a331bec2b4bceb340d25fa4fa74db5426f3a18408104"} Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.340449 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vhtdt" event={"ID":"02158b16-2eb1-4b8e-b1bb-55285b22d053","Type":"ContainerStarted","Data":"ece2bca6f5f907bdaaef3f0e5763d781791c7e86b194dbdc6b5211b4a92b9f39"} Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.341780 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ljnh7" event={"ID":"38f843fe-c20b-4bc7-8f45-4ccd4b7be5a5","Type":"ContainerStarted","Data":"12970cdc9701917e728084155eb6d878339e9df9344d7dd59010727b1bb27fcc"} Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.343277 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v68nn" event={"ID":"2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab","Type":"ContainerStarted","Data":"a3188f76f9132bb5a0195bb3f3bbc92354939921ea4aa61b37b491752b4801ba"} Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.344678 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kfqcf" event={"ID":"706713ee-0ea2-4018-847c-ccf3a0fafb1c","Type":"ContainerStarted","Data":"3f153778ee3d6c5b312901d4605b400f6c87dd4cb70a25759341615692777c20"} Jan 29 16:12:21 crc 
kubenswrapper[4714]: I0129 16:12:21.346092 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zkbcz" event={"ID":"f18250a8-66c1-445d-9452-081de13b24f7","Type":"ContainerStarted","Data":"d51164f9bf54c0d21fc3ca36f0a0590a0e717bae11edde541ec1b0640a070ea6"} Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.347603 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jcdhl" event={"ID":"fcaee576-dff0-4a67-a0b1-7347b3030729","Type":"ContainerStarted","Data":"ae8bbcc798871173e7ad0623699e7fd4462664f064c96ab7c3895b33345e3b8d"} Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.350153 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" event={"ID":"c779f8ba-7614-49f1-be6d-a9e316ec59ba","Type":"ContainerStarted","Data":"36273e1b01e65c4da1c39175bd578ab7ee8cf8f96e70f3732cc6c1016c11785a"} Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.351119 4714 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn75b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.351143 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-l2t56" Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.351166 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fn75b" podUID="42b66dc3-a385-4350-a943-50f062da35f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.352364 4714 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-l2t56 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.352398 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-l2t56" podUID="80515d06-c09e-4c9d-a90f-43cc84edf4c9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.376554 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-nh2m9" podStartSLOduration=127.37653799 podStartE2EDuration="2m7.37653799s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:21.375428016 +0000 UTC m=+147.895929136" watchObservedRunningTime="2026-01-29 16:12:21.37653799 +0000 UTC m=+147.897039110" Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.392794 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:21 crc kubenswrapper[4714]: E0129 16:12:21.393246 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:21.893204779 +0000 UTC m=+148.413705909 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.393563 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:21 crc kubenswrapper[4714]: E0129 16:12:21.396801 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:21.896781269 +0000 UTC m=+148.417282599 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.400009 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-lz6mw" podStartSLOduration=127.399990727 podStartE2EDuration="2m7.399990727s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:21.398964366 +0000 UTC m=+147.919465496" watchObservedRunningTime="2026-01-29 16:12:21.399990727 +0000 UTC m=+147.920491847" Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.463151 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2lpzx" podStartSLOduration=127.463128038 podStartE2EDuration="2m7.463128038s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:21.461206419 +0000 UTC m=+147.981707539" watchObservedRunningTime="2026-01-29 16:12:21.463128038 +0000 UTC m=+147.983629158" Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.489187 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/marketplace-operator-79b997595-l2t56" podStartSLOduration=126.489167624 podStartE2EDuration="2m6.489167624s" podCreationTimestamp="2026-01-29 16:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:21.488751771 +0000 UTC m=+148.009252891" watchObservedRunningTime="2026-01-29 16:12:21.489167624 +0000 UTC m=+148.009668744" Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.497562 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:21 crc kubenswrapper[4714]: E0129 16:12:21.497834 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:21.997807708 +0000 UTC m=+148.518308828 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.546575 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mrprd" podStartSLOduration=127.546556919 podStartE2EDuration="2m7.546556919s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:21.527594209 +0000 UTC m=+148.048095329" watchObservedRunningTime="2026-01-29 16:12:21.546556919 +0000 UTC m=+148.067058029" Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.572885 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-zxb4v" podStartSLOduration=8.572866193 podStartE2EDuration="8.572866193s" podCreationTimestamp="2026-01-29 16:12:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:21.547141606 +0000 UTC m=+148.067642726" watchObservedRunningTime="2026-01-29 16:12:21.572866193 +0000 UTC m=+148.093367313" Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.573637 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" podStartSLOduration=127.573628876 podStartE2EDuration="2m7.573628876s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:21.570364847 +0000 UTC m=+148.090865967" watchObservedRunningTime="2026-01-29 16:12:21.573628876 +0000 UTC m=+148.094130006" Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 
16:12:21.590426 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ch6wr" podStartSLOduration=126.590404589 podStartE2EDuration="2m6.590404589s" podCreationTimestamp="2026-01-29 16:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:21.589349317 +0000 UTC m=+148.109850437" watchObservedRunningTime="2026-01-29 16:12:21.590404589 +0000 UTC m=+148.110905709" Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.603677 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:21 crc kubenswrapper[4714]: E0129 16:12:21.604145 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:22.104131379 +0000 UTC m=+148.624632499 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.613679 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4dn69" podStartSLOduration=127.61363821 podStartE2EDuration="2m7.61363821s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:21.609671899 +0000 UTC m=+148.130173019" watchObservedRunningTime="2026-01-29 16:12:21.61363821 +0000 UTC m=+148.134139330" Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.704469 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:21 crc kubenswrapper[4714]: E0129 16:12:21.704754 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:22.204725385 +0000 UTC m=+148.725226525 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.704852 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:21 crc kubenswrapper[4714]: E0129 16:12:21.705244 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:22.205235001 +0000 UTC m=+148.725736111 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.738357 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-lz6mw" Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.741174 4714 patch_prober.go:28] interesting pod/router-default-5444994796-lz6mw container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.741327 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lz6mw" podUID="7a1dfb55-8680-4cbe-bd78-caca2e847caf" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.805720 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:21 crc kubenswrapper[4714]: E0129 16:12:21.805891 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:22.305863988 +0000 UTC m=+148.826365108 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.806141 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:21 crc kubenswrapper[4714]: E0129 16:12:21.806486 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:22.306474486 +0000 UTC m=+148.826975606 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.908083 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:21 crc kubenswrapper[4714]: E0129 16:12:21.908257 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:22.408233238 +0000 UTC m=+148.928734358 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:21 crc kubenswrapper[4714]: I0129 16:12:21.908373 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:21 crc kubenswrapper[4714]: E0129 16:12:21.908695 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:22.408686422 +0000 UTC m=+148.929187542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.009735 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:22 crc kubenswrapper[4714]: E0129 16:12:22.009903 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:22.509879486 +0000 UTC m=+149.030380606 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.010091 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:22 crc kubenswrapper[4714]: E0129 16:12:22.010694 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:22.51066661 +0000 UTC m=+149.031167730 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.111054 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:22 crc kubenswrapper[4714]: E0129 16:12:22.111434 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:22.611419341 +0000 UTC m=+149.131920461 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.212307 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:22 crc kubenswrapper[4714]: E0129 16:12:22.212753 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:22.712734989 +0000 UTC m=+149.233236129 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.313657 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:22 crc kubenswrapper[4714]: E0129 16:12:22.313837 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:22.813810509 +0000 UTC m=+149.334311629 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.313968 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:22 crc kubenswrapper[4714]: E0129 16:12:22.314269 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:22.814257133 +0000 UTC m=+149.334758253 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.356676 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99knh" event={"ID":"5b5da98c-0704-41c7-8563-707f7af93f41","Type":"ContainerStarted","Data":"fdfa177d70738b859553617206bc689e14aecd644fa4b96d1292a33aa982514a"} Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.358285 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9thpj" event={"ID":"d288ee23-1753-48f2-ab82-736defe5fe18","Type":"ContainerStarted","Data":"5217358d817ce7eff91c5ce92675d69f303d8fb317d488bdfcb19f7a4a81e957"} Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.360744 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6jl75" event={"ID":"99bab267-639b-48b1-abc4-8c0373200a39","Type":"ContainerStarted","Data":"0649707319cf315cb884412df9aee13b92a185a2ed35bc0133278b8e0ac6b87a"} Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.362431 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r6cxt" event={"ID":"1d9869e2-6f55-4246-8ed0-b8af9dab3f74","Type":"ContainerStarted","Data":"20a2b239f390c2ee891200c241fcd960b389bf031f732ea731cef1b393235d6f"} Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.368701 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-z4h55" event={"ID":"bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92","Type":"ContainerStarted","Data":"e62dd24fddcd7bdb174cdd4cfdba6907f75f804fea4a0898502c6d9dac1157bd"} Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.371184 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-sv7xw" event={"ID":"b7cf219f-4e80-47fc-b349-ea5c7eab6d9d","Type":"ContainerStarted","Data":"e19b79c6c66557c4fe43cea947e82f0138aa0b8582b2b23582a033be4c765462"} Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.372847 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nf7jb" event={"ID":"2eab9b06-06e4-4f58-ab86-ab1bf3b5cc96","Type":"ContainerStarted","Data":"4a8a76e56d6dacf387ad607f5c069cb41422dd571ddbdedad5ba541c251edfae"} Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.375578 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dwsm5" event={"ID":"8f71ba3e-c687-4ff7-9475-1e18ded764f6","Type":"ContainerStarted","Data":"0280359fe4f163eeaed24796981149340b0af92903e6c783ada39027df0eb8e3"} Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.376480 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v68nn" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.376489 4714 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-l2t56 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.376549 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-l2t56" podUID="80515d06-c09e-4c9d-a90f-43cc84edf4c9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.380061 4714 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-v68nn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.380117 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v68nn" podUID="2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.395095 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9thpj" podStartSLOduration=128.395074014 podStartE2EDuration="2m8.395074014s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:22.374721932 +0000 UTC m=+148.895223052" watchObservedRunningTime="2026-01-29 16:12:22.395074014 +0000 UTC m=+148.915575134" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.415662 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-kfqcf" podStartSLOduration=127.415645223 podStartE2EDuration="2m7.415645223s" podCreationTimestamp="2026-01-29 16:10:15 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:22.41490972 +0000 UTC m=+148.935410850" watchObservedRunningTime="2026-01-29 16:12:22.415645223 +0000 UTC m=+148.936146343" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.417401 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-z4h55" podStartSLOduration=127.417390406 podStartE2EDuration="2m7.417390406s" podCreationTimestamp="2026-01-29 16:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:22.397641632 +0000 UTC m=+148.918142762" watchObservedRunningTime="2026-01-29 16:12:22.417390406 +0000 UTC m=+148.937891526" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.426196 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:22 crc kubenswrapper[4714]: E0129 16:12:22.426407 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:22.926379291 +0000 UTC m=+149.446880411 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.426590 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.426631 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.426657 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.426701 4714 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.426998 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:22 crc kubenswrapper[4714]: E0129 16:12:22.432670 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:22.932650423 +0000 UTC m=+149.453151623 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.434632 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.435577 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.436902 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.439730 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5mkf8" podStartSLOduration=127.439720769 podStartE2EDuration="2m7.439720769s" podCreationTimestamp="2026-01-29 16:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:22.438238924 +0000 UTC m=+148.958740054" watchObservedRunningTime="2026-01-29 16:12:22.439720769 +0000 UTC m=+148.960221879" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.441177 4714 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.468063 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ljnh7" podStartSLOduration=127.468043795 podStartE2EDuration="2m7.468043795s" podCreationTimestamp="2026-01-29 16:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:22.45281909 +0000 UTC m=+148.973320240" watchObservedRunningTime="2026-01-29 16:12:22.468043795 +0000 UTC m=+148.988544915" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.470004 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-chzp2" podStartSLOduration=128.469989885 podStartE2EDuration="2m8.469989885s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:22.466998913 +0000 UTC m=+148.987500033" watchObservedRunningTime="2026-01-29 16:12:22.469989885 +0000 UTC m=+148.990491015" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.483374 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dwsm5" podStartSLOduration=128.483356713 podStartE2EDuration="2m8.483356713s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:22.481534108 +0000 UTC m=+149.002035228" watchObservedRunningTime="2026-01-29 16:12:22.483356713 +0000 UTC m=+149.003857833" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.496326 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-44gfk" podStartSLOduration=127.49631164 podStartE2EDuration="2m7.49631164s" podCreationTimestamp="2026-01-29 16:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:22.49502951 +0000 UTC m=+149.015530630" watchObservedRunningTime="2026-01-29 16:12:22.49631164 +0000 UTC m=+149.016812760" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.508875 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-vhtdt" podStartSLOduration=9.508853643 podStartE2EDuration="9.508853643s" podCreationTimestamp="2026-01-29 16:12:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:22.507551433 +0000 UTC m=+149.028052553" watchObservedRunningTime="2026-01-29 16:12:22.508853643 +0000 UTC m=+149.029354763" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.519124 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.524409 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.525312 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" podStartSLOduration=127.525292666 podStartE2EDuration="2m7.525292666s" podCreationTimestamp="2026-01-29 16:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:22.524025847 +0000 UTC m=+149.044526967" watchObservedRunningTime="2026-01-29 16:12:22.525292666 +0000 UTC m=+149.045793796" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.527411 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:22 crc kubenswrapper[4714]: E0129 16:12:22.527848 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:23.027812733 +0000 UTC m=+149.548313863 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.552261 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sq9mx" podStartSLOduration=127.55224134 podStartE2EDuration="2m7.55224134s" podCreationTimestamp="2026-01-29 16:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:22.551034893 +0000 UTC m=+149.071536013" watchObservedRunningTime="2026-01-29 16:12:22.55224134 +0000 UTC m=+149.072742460" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.573414 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xvrxj" podStartSLOduration=128.573397327 podStartE2EDuration="2m8.573397327s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:22.572275802 +0000 UTC m=+149.092776922" watchObservedRunningTime="2026-01-29 16:12:22.573397327 +0000 UTC m=+149.093898457" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.599044 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.599919 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v68nn" podStartSLOduration=127.599908237 podStartE2EDuration="2m7.599908237s" podCreationTimestamp="2026-01-29 16:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:22.599077972 +0000 UTC m=+149.119579092" watchObservedRunningTime="2026-01-29 16:12:22.599908237 +0000 UTC m=+149.120409357" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.621195 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzg2m" podStartSLOduration=127.621177328 podStartE2EDuration="2m7.621177328s" podCreationTimestamp="2026-01-29 16:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:22.620821987 +0000 UTC m=+149.141323107" watchObservedRunningTime="2026-01-29 16:12:22.621177328 +0000 UTC m=+149.141678458" Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.629078 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:22 crc kubenswrapper[4714]: E0129 16:12:22.629366 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:23.129355698 +0000 UTC m=+149.649856818 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.732727 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:22 crc kubenswrapper[4714]: E0129 16:12:22.733341 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:23.233325436 +0000 UTC m=+149.753826556 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.750502 4714 patch_prober.go:28] interesting pod/router-default-5444994796-lz6mw container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.750556 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lz6mw" podUID="7a1dfb55-8680-4cbe-bd78-caca2e847caf" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 29 16:12:22 crc kubenswrapper[4714]: W0129 16:12:22.791208 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-2f2774f8508cbdbf6b13c5b366a676e0f00b06214da0c20f702f75031ddf6782 WatchSource:0}: Error finding container 2f2774f8508cbdbf6b13c5b366a676e0f00b06214da0c20f702f75031ddf6782: Status 404 returned error can't find the container with id 2f2774f8508cbdbf6b13c5b366a676e0f00b06214da0c20f702f75031ddf6782 Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.836480 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:22 crc kubenswrapper[4714]: E0129 16:12:22.836860 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:23.336847471 +0000 UTC m=+149.857348581 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:22 crc kubenswrapper[4714]: W0129 16:12:22.893822 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-233de5020fb91da4466eb92d9dbe85fa38ae483b312d7f505f7c7f9fcc6c97ae WatchSource:0}: Error finding container 233de5020fb91da4466eb92d9dbe85fa38ae483b312d7f505f7c7f9fcc6c97ae: Status 404 returned error can't find the container with id 233de5020fb91da4466eb92d9dbe85fa38ae483b312d7f505f7c7f9fcc6c97ae Jan 29 16:12:22 crc kubenswrapper[4714]: I0129 16:12:22.937790 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:22 crc kubenswrapper[4714]: E0129 16:12:22.938166 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:23.438148109 +0000 UTC m=+149.958649229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.040370 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:23 crc kubenswrapper[4714]: E0129 16:12:23.040637 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:23.540625422 +0000 UTC m=+150.061126542 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.141060 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:23 crc kubenswrapper[4714]: E0129 16:12:23.141247 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:23.641219498 +0000 UTC m=+150.161720628 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.141321 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:23 crc kubenswrapper[4714]: E0129 16:12:23.141607 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:23.64159795 +0000 UTC m=+150.162099080 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.242524 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:23 crc kubenswrapper[4714]: E0129 16:12:23.242708 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:23.74267722 +0000 UTC m=+150.263178350 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.242784 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:23 crc kubenswrapper[4714]: E0129 16:12:23.243122 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:23.743110364 +0000 UTC m=+150.263611474 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.344458 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:23 crc kubenswrapper[4714]: E0129 16:12:23.344702 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:23.844671979 +0000 UTC m=+150.365173099 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.345018 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:23 crc kubenswrapper[4714]: E0129 16:12:23.345403 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:23.845388671 +0000 UTC m=+150.365889801 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.383147 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"233de5020fb91da4466eb92d9dbe85fa38ae483b312d7f505f7c7f9fcc6c97ae"} Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.384523 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"803bf00dc059eba74bdf6347eeb2d3bed51ce9ca6e5847b0b7980ffcfe2f8519"} Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.385843 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"31d4d8de5b9e812dda05d91762b6d773fa399a230b3ec3fce86d9285fe3fbdc3"} Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.385897 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"2f2774f8508cbdbf6b13c5b366a676e0f00b06214da0c20f702f75031ddf6782"} Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.388116 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6jl75" event={"ID":"99bab267-639b-48b1-abc4-8c0373200a39","Type":"ContainerStarted","Data":"16e55a14fb4aaaa9b4a2ff320e0057b8299dcd5719d4e77935929d031851eb87"} Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.390209 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xtzbx" event={"ID":"2fa1ede8-3ea3-421d-929d-f6bf9cc1db0e","Type":"ContainerStarted","Data":"c5f70c46d50b5294e7f36701d45692d9d1f41d9d50460dd007c421dba9653d17"} Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.392524 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zkbcz" event={"ID":"f18250a8-66c1-445d-9452-081de13b24f7","Type":"ContainerStarted","Data":"c5f98c48f6813667c5fbd018311042683d511aa37bfde013ffde64453733828b"} Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.395072 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jcdhl" event={"ID":"fcaee576-dff0-4a67-a0b1-7347b3030729","Type":"ContainerStarted","Data":"464ad6a3b23bf3d892319405739fd7bdedc4a42ece626b872203487f1c929e1a"} Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.395199 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-jcdhl" Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.400459 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbrmk" 
event={"ID":"adc3900b-dce0-4da4-bfc2-bca85b2395b2","Type":"ContainerStarted","Data":"db7e47982d58f7123b5f33d45571e2ab833e4d49905865e79880a686399644fb"} Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.400510 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbrmk" Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.400882 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dwsm5" Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.403304 4714 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-v68nn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.403360 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v68nn" podUID="2fbdedcd-e0ce-4fe9-b19a-53d422b1fcab" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.408731 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xtzbx" podStartSLOduration=128.408715227 podStartE2EDuration="2m8.408715227s" podCreationTimestamp="2026-01-29 16:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:23.408234873 +0000 UTC m=+149.928736013" watchObservedRunningTime="2026-01-29 16:12:23.408715227 +0000 UTC m=+149.929216357" Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.433683 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nf7jb" podStartSLOduration=129.43366403 podStartE2EDuration="2m9.43366403s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:23.430810183 +0000 UTC m=+149.951311303" watchObservedRunningTime="2026-01-29 16:12:23.43366403 +0000 UTC m=+149.954165170" Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.446617 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:23 crc kubenswrapper[4714]: E0129 16:12:23.447268 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:23.947246585 +0000 UTC m=+150.467747715 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.447925 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:23 crc kubenswrapper[4714]: E0129 16:12:23.450005 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:23.949988949 +0000 UTC m=+150.470490079 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.489047 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-sv7xw" podStartSLOduration=129.489029753 podStartE2EDuration="2m9.489029753s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:23.463825622 +0000 UTC m=+149.984326742" watchObservedRunningTime="2026-01-29 16:12:23.489029753 +0000 UTC m=+150.009530873" Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.490883 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-jcdhl" podStartSLOduration=10.490874629 podStartE2EDuration="10.490874629s" podCreationTimestamp="2026-01-29 16:12:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:23.488412144 +0000 UTC m=+150.008913264" watchObservedRunningTime="2026-01-29 16:12:23.490874629 +0000 UTC m=+150.011375749" Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.506976 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r6cxt" podStartSLOduration=129.506948851 podStartE2EDuration="2m9.506948851s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:23.504730593 +0000 UTC m=+150.025231713" watchObservedRunningTime="2026-01-29 16:12:23.506948851 +0000 UTC m=+150.027449971" Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.529224 4714 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-99knh" podStartSLOduration=129.529208522 podStartE2EDuration="2m9.529208522s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:23.525700324 +0000 UTC m=+150.046201444" watchObservedRunningTime="2026-01-29 16:12:23.529208522 +0000 UTC m=+150.049709642" Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.548973 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:23 crc kubenswrapper[4714]: E0129 16:12:23.549308 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:24.049292736 +0000 UTC m=+150.569793856 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.559617 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-zkbcz" podStartSLOduration=128.559603881 podStartE2EDuration="2m8.559603881s" podCreationTimestamp="2026-01-29 16:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:23.558321272 +0000 UTC m=+150.078822392" watchObservedRunningTime="2026-01-29 16:12:23.559603881 +0000 UTC m=+150.080105001" Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.650450 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:23 crc kubenswrapper[4714]: E0129 16:12:23.650832 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:24.15081544 +0000 UTC m=+150.671316560 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.740990 4714 patch_prober.go:28] interesting pod/router-default-5444994796-lz6mw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:12:23 crc kubenswrapper[4714]: [-]has-synced failed: reason withheld Jan 29 16:12:23 crc kubenswrapper[4714]: [+]process-running ok Jan 29 16:12:23 crc kubenswrapper[4714]: healthz check failed Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.741064 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lz6mw" podUID="7a1dfb55-8680-4cbe-bd78-caca2e847caf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.751824 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:23 crc kubenswrapper[4714]: E0129 16:12:23.752177 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:24.252145328 +0000 UTC m=+150.772646448 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.752262 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:23 crc kubenswrapper[4714]: E0129 16:12:23.752643 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:24.252630823 +0000 UTC m=+150.773131943 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.853263 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:23 crc kubenswrapper[4714]: E0129 16:12:23.853412 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:24.353391584 +0000 UTC m=+150.873892704 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.853448 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:23 crc kubenswrapper[4714]: E0129 16:12:23.853759 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:24.353751475 +0000 UTC m=+150.874252595 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.954531 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:23 crc kubenswrapper[4714]: E0129 16:12:23.954720 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:24.454698442 +0000 UTC m=+150.975199562 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:23 crc kubenswrapper[4714]: I0129 16:12:23.954780 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:23 crc kubenswrapper[4714]: E0129 16:12:23.955118 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:24.455111074 +0000 UTC m=+150.975612194 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.055809 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:24 crc kubenswrapper[4714]: E0129 16:12:24.056034 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:24.5560084 +0000 UTC m=+151.076509520 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.056420 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:24 crc kubenswrapper[4714]: E0129 16:12:24.056719 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:24.556709811 +0000 UTC m=+151.077210931 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.157324 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:24 crc kubenswrapper[4714]: E0129 16:12:24.157464 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:24.657440401 +0000 UTC m=+151.177941521 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.157511 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:24 crc kubenswrapper[4714]: E0129 16:12:24.157842 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:24.657827893 +0000 UTC m=+151.178329053 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.205736 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbrmk" podStartSLOduration=129.205700887 podStartE2EDuration="2m9.205700887s" podCreationTimestamp="2026-01-29 16:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:23.578906611 +0000 UTC m=+150.099407731" watchObservedRunningTime="2026-01-29 16:12:24.205700887 +0000 UTC m=+150.726202007" Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.259100 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:24 crc kubenswrapper[4714]: E0129 16:12:24.259282 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:24.759252914 +0000 UTC m=+151.279754034 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.259368 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:24 crc kubenswrapper[4714]: E0129 16:12:24.259730 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:24.759720389 +0000 UTC m=+151.280221589 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.360339 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:24 crc kubenswrapper[4714]: E0129 16:12:24.360571 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:24.860522791 +0000 UTC m=+151.381023921 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.360647 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:24 crc kubenswrapper[4714]: E0129 16:12:24.361234 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:24.861219562 +0000 UTC m=+151.381720692 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.376505 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.378663 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.385586 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.385873 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.390772 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.426519 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"7c90fd8a3db23778d0ce37dffb6807206e355df3febeeeab78212b0d154f7f50"} Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.429203 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"59f2ceae6c23231edc9f1b9ef386c9a7f144a70e49fa01817a311892266720b9"} Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.452083 4714 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-dwsm5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.452146 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dwsm5" podUID="8f71ba3e-c687-4ff7-9475-1e18ded764f6" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.463409 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.463690 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fee5819c-8349-4080-9922-453f31a300da-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fee5819c-8349-4080-9922-453f31a300da\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.463878 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fee5819c-8349-4080-9922-453f31a300da-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fee5819c-8349-4080-9922-453f31a300da\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:12:24 crc kubenswrapper[4714]: E0129 16:12:24.465185 4714 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:24.96516058 +0000 UTC m=+151.485661730 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.499439 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-6jl75" podStartSLOduration=130.499421798 podStartE2EDuration="2m10.499421798s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:24.497885391 +0000 UTC m=+151.018386511" watchObservedRunningTime="2026-01-29 16:12:24.499421798 +0000 UTC m=+151.019922918" Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.565150 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.565221 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fee5819c-8349-4080-9922-453f31a300da-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fee5819c-8349-4080-9922-453f31a300da\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.565290 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fee5819c-8349-4080-9922-453f31a300da-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fee5819c-8349-4080-9922-453f31a300da\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:12:24 crc kubenswrapper[4714]: E0129 16:12:24.565865 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:25.065849619 +0000 UTC m=+151.586350749 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.566068 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fee5819c-8349-4080-9922-453f31a300da-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fee5819c-8349-4080-9922-453f31a300da\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.608786 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fee5819c-8349-4080-9922-453f31a300da-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fee5819c-8349-4080-9922-453f31a300da\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.666174 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:24 crc kubenswrapper[4714]: E0129 16:12:24.666612 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:25.16659361 +0000 UTC m=+151.687094730 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.706105 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.741239 4714 patch_prober.go:28] interesting pod/router-default-5444994796-lz6mw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:12:24 crc kubenswrapper[4714]: [-]has-synced failed: reason withheld Jan 29 16:12:24 crc kubenswrapper[4714]: [+]process-running ok Jan 29 16:12:24 crc kubenswrapper[4714]: healthz check failed Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.741284 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lz6mw" podUID="7a1dfb55-8680-4cbe-bd78-caca2e847caf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.768161 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:24 crc kubenswrapper[4714]: E0129 16:12:24.768436 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:25.268425353 +0000 UTC m=+151.788926473 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.778058 4714 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-dwsm5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.778096 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dwsm5" podUID="8f71ba3e-c687-4ff7-9475-1e18ded764f6" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.778132 4714 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-dwsm5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.778195 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dwsm5" 
podUID="8f71ba3e-c687-4ff7-9475-1e18ded764f6" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.869723 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:24 crc kubenswrapper[4714]: E0129 16:12:24.870091 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:25.370076581 +0000 UTC m=+151.890577701 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:24 crc kubenswrapper[4714]: I0129 16:12:24.971248 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:24 crc kubenswrapper[4714]: E0129 16:12:24.971580 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:25.471568875 +0000 UTC m=+151.992069995 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.021523 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.072239 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:25 crc kubenswrapper[4714]: E0129 16:12:25.072610 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:25.572594774 +0000 UTC m=+152.093095894 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.173795 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:25 crc kubenswrapper[4714]: E0129 16:12:25.174253 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:25.674239152 +0000 UTC m=+152.194740272 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.275367 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:25 crc kubenswrapper[4714]: E0129 16:12:25.275536 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:25.775507858 +0000 UTC m=+152.296008978 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.275829 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:25 crc kubenswrapper[4714]: E0129 16:12:25.276146 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:25.776136668 +0000 UTC m=+152.296637858 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.377158 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:25 crc kubenswrapper[4714]: E0129 16:12:25.377433 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:25.877419275 +0000 UTC m=+152.397920395 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.433503 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4nghl" event={"ID":"714cef39-2960-4a25-ac81-a4e65a115eb3","Type":"ContainerStarted","Data":"94e108c84782e87694fda57ab7922c324fb5c9f04546542dd16572b2af15e6d5"} Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.435112 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fee5819c-8349-4080-9922-453f31a300da","Type":"ContainerStarted","Data":"aabce909ed74e646558bee9fa02988139c96b4993b0d41ddfcd7351dba0ff624"} Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.478794 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:25 crc kubenswrapper[4714]: E0129 16:12:25.479411 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:25.979395633 +0000 UTC m=+152.499896753 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.511702 4714 csr.go:261] certificate signing request csr-hc25h is approved, waiting to be issued Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.519171 4714 csr.go:257] certificate signing request csr-hc25h is issued Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.579531 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:25 crc kubenswrapper[4714]: E0129 16:12:25.579714 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:26.079656508 +0000 UTC m=+152.600157628 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.579770 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:25 crc kubenswrapper[4714]: E0129 16:12:25.580095 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:26.080087582 +0000 UTC m=+152.600588702 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.607982 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-kvp9d" Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.612979 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-kvp9d" Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.680650 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:25 crc kubenswrapper[4714]: E0129 16:12:25.680862 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:26.180841862 +0000 UTC m=+152.701342982 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.680911 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.680967 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.681140 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:25 crc kubenswrapper[4714]: E0129 16:12:25.681459 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:26.181450301 +0000 UTC m=+152.701951421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.682665 4714 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-kgl5s container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.682716 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" podUID="c779f8ba-7614-49f1-be6d-a9e316ec59ba" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.716927 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.716997 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.718890 4714 patch_prober.go:28] interesting pod/console-f9d7485db-m2g9h container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.718943 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-m2g9h" podUID="0e2a789d-6a90-4d60-881e-9562cd92e0a7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.741588 4714 patch_prober.go:28] interesting pod/router-default-5444994796-lz6mw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:12:25 crc kubenswrapper[4714]: [-]has-synced failed: reason withheld Jan 29 16:12:25 crc kubenswrapper[4714]: [+]process-running ok Jan 29 16:12:25 crc kubenswrapper[4714]: healthz check failed Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.741643 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lz6mw" podUID="7a1dfb55-8680-4cbe-bd78-caca2e847caf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.782102 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:25 crc kubenswrapper[4714]: E0129 16:12:25.782285 4714 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:26.282267034 +0000 UTC m=+152.802768154 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.782344 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:25 crc kubenswrapper[4714]: E0129 16:12:25.782781 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:26.282764579 +0000 UTC m=+152.803265699 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.795250 4714 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn75b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.795260 4714 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn75b container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.795354 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fn75b" podUID="42b66dc3-a385-4350-a943-50f062da35f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.795309 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fn75b" podUID="42b66dc3-a385-4350-a943-50f062da35f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.877413 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.884660 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:25 crc kubenswrapper[4714]: E0129 16:12:25.885003 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:26.384988925 +0000 UTC m=+152.905490045 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.910347 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw" Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.920806 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw" Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.986476 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:25 crc kubenswrapper[4714]: E0129 16:12:25.987449 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:26.487432777 +0000 UTC m=+153.007933897 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.992224 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.992261 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.993173 4714 patch_prober.go:28] interesting pod/apiserver-76f77b778f-6jl75 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 29 16:12:25 crc kubenswrapper[4714]: I0129 16:12:25.993210 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-6jl75" podUID="99bab267-639b-48b1-abc4-8c0373200a39" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.087393 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:26 crc kubenswrapper[4714]: E0129 16:12:26.087560 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:26.587536368 +0000 UTC m=+153.108037488 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.087635 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:26 crc kubenswrapper[4714]: E0129 16:12:26.088073 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 16:12:26.588053764 +0000 UTC m=+153.108554884 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.101786 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.110982 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.188726 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:26 crc kubenswrapper[4714]: E0129 16:12:26.188898 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:26.688862066 +0000 UTC m=+153.209363186 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.189074 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:26 crc kubenswrapper[4714]: E0129 16:12:26.190271 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:26.690257999 +0000 UTC m=+153.210759119 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.289895 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:26 crc kubenswrapper[4714]: E0129 16:12:26.290090 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:26.79006323 +0000 UTC m=+153.310564350 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.290155 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:26 crc kubenswrapper[4714]: E0129 16:12:26.290490 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:26.790482692 +0000 UTC m=+153.310983812 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.391400 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:26 crc kubenswrapper[4714]: E0129 16:12:26.391590 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:26.891559763 +0000 UTC m=+153.412060883 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.391638 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:26 crc kubenswrapper[4714]: E0129 16:12:26.391966 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:26.891952565 +0000 UTC m=+153.412453685 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.442484 4714 generic.go:334] "Generic (PLEG): container finished" podID="cb979c55-3027-4d92-94b9-cd17c32e6331" containerID="8002880bca0bfedb17cd3285afc46c8ef8627aa8d830b5e5901fd7ddfd3b1e33" exitCode=0 Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.442571 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5mkf8" event={"ID":"cb979c55-3027-4d92-94b9-cd17c32e6331","Type":"ContainerDied","Data":"8002880bca0bfedb17cd3285afc46c8ef8627aa8d830b5e5901fd7ddfd3b1e33"} Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.445182 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fee5819c-8349-4080-9922-453f31a300da","Type":"ContainerStarted","Data":"35db7973ed160da110c671870acde50bc58a4cf360b02511f01c25b5e19a9de9"} Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.489747 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.489727385 podStartE2EDuration="2.489727385s" podCreationTimestamp="2026-01-29 16:12:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:26.488850638 +0000 UTC m=+153.009351758" watchObservedRunningTime="2026-01-29 16:12:26.489727385 +0000 UTC m=+153.010228505" Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.492564 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:26 crc kubenswrapper[4714]: E0129 16:12:26.492890 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:26.99285198 +0000 UTC m=+153.513353100 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.492989 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:26 crc kubenswrapper[4714]: E0129 16:12:26.494251 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:26.994234993 +0000 UTC m=+153.514736113 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.513579 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzg2m" Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.519768 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jzg2m" Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.520083 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-29 16:07:25 +0000 UTC, rotation deadline is 2026-10-30 04:21:57.88643458 +0000 UTC Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.520133 4714 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6564h9m31.366304766s for next certificate rotation Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.594153 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:26 crc kubenswrapper[4714]: E0129 16:12:26.595333 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:27.095313483 +0000 UTC m=+153.615814603 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.695858 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:26 crc kubenswrapper[4714]: E0129 16:12:26.696310 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:27.196287921 +0000 UTC m=+153.716789041 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.738018 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-lz6mw" Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.744007 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xtr82"] Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.745180 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xtr82" Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.745891 4714 patch_prober.go:28] interesting pod/router-default-5444994796-lz6mw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:12:26 crc kubenswrapper[4714]: [-]has-synced failed: reason withheld Jan 29 16:12:26 crc kubenswrapper[4714]: [+]process-running ok Jan 29 16:12:26 crc kubenswrapper[4714]: healthz check failed Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.745962 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lz6mw" podUID="7a1dfb55-8680-4cbe-bd78-caca2e847caf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.758852 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.791358 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-l2t56" Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.797456 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:26 crc kubenswrapper[4714]: E0129 16:12:26.797653 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:27.297621969 +0000 UTC m=+153.818123089 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.797709 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbn22\" (UniqueName: \"kubernetes.io/projected/11a30de8-b234-47b4-8fd0-44f0c428be78-kube-api-access-zbn22\") pod \"community-operators-xtr82\" (UID: \"11a30de8-b234-47b4-8fd0-44f0c428be78\") " pod="openshift-marketplace/community-operators-xtr82" Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.797814 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11a30de8-b234-47b4-8fd0-44f0c428be78-utilities\") pod \"community-operators-xtr82\" (UID: \"11a30de8-b234-47b4-8fd0-44f0c428be78\") " pod="openshift-marketplace/community-operators-xtr82" Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.798057 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.798249 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11a30de8-b234-47b4-8fd0-44f0c428be78-catalog-content\") pod \"community-operators-xtr82\" (UID: \"11a30de8-b234-47b4-8fd0-44f0c428be78\") " pod="openshift-marketplace/community-operators-xtr82" Jan 29 16:12:26 crc kubenswrapper[4714]: E0129 16:12:26.798455 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:27.298440334 +0000 UTC m=+153.818941454 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.815373 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xtr82"] Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.895685 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ljnh7" Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.899976 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.900340 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11a30de8-b234-47b4-8fd0-44f0c428be78-catalog-content\") pod \"community-operators-xtr82\" (UID: \"11a30de8-b234-47b4-8fd0-44f0c428be78\") " pod="openshift-marketplace/community-operators-xtr82" Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.900373 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbn22\" (UniqueName: \"kubernetes.io/projected/11a30de8-b234-47b4-8fd0-44f0c428be78-kube-api-access-zbn22\") pod \"community-operators-xtr82\" (UID: \"11a30de8-b234-47b4-8fd0-44f0c428be78\") " pod="openshift-marketplace/community-operators-xtr82" Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.900403 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11a30de8-b234-47b4-8fd0-44f0c428be78-utilities\") pod \"community-operators-xtr82\" (UID: \"11a30de8-b234-47b4-8fd0-44f0c428be78\") " pod="openshift-marketplace/community-operators-xtr82" Jan 29 16:12:26 crc kubenswrapper[4714]: E0129 16:12:26.901428 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:27.401410383 +0000 UTC m=+153.921911503 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.901918 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11a30de8-b234-47b4-8fd0-44f0c428be78-catalog-content\") pod \"community-operators-xtr82\" (UID: \"11a30de8-b234-47b4-8fd0-44f0c428be78\") " pod="openshift-marketplace/community-operators-xtr82" Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.902068 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11a30de8-b234-47b4-8fd0-44f0c428be78-utilities\") pod \"community-operators-xtr82\" (UID: \"11a30de8-b234-47b4-8fd0-44f0c428be78\") " pod="openshift-marketplace/community-operators-xtr82" Jan 29 16:12:26 crc kubenswrapper[4714]: I0129 16:12:26.910327 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ljnh7" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.002656 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:27 crc kubenswrapper[4714]: E0129 16:12:27.003313 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:27.503302038 +0000 UTC m=+154.023803158 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.005759 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbn22\" (UniqueName: \"kubernetes.io/projected/11a30de8-b234-47b4-8fd0-44f0c428be78-kube-api-access-zbn22\") pod \"community-operators-xtr82\" (UID: \"11a30de8-b234-47b4-8fd0-44f0c428be78\") " pod="openshift-marketplace/community-operators-xtr82" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.023039 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-74twj"] Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.023904 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-74twj" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.040589 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.056736 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.058012 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.061252 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xtr82" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.085578 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.092256 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.110565 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-74twj"] Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.113399 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.113670 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a97ed1ff-657f-4bde-943b-78caf9d07f92-utilities\") pod \"certified-operators-74twj\" (UID: \"a97ed1ff-657f-4bde-943b-78caf9d07f92\") " pod="openshift-marketplace/certified-operators-74twj" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.113709 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.113786 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.113803 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8bfv\" (UniqueName: \"kubernetes.io/projected/a97ed1ff-657f-4bde-943b-78caf9d07f92-kube-api-access-v8bfv\") pod \"certified-operators-74twj\" (UID: \"a97ed1ff-657f-4bde-943b-78caf9d07f92\") " pod="openshift-marketplace/certified-operators-74twj" Jan 29 16:12:27 crc kubenswrapper[4714]: E0129 16:12:27.113867 4714 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:27.613836838 +0000 UTC m=+154.134337958 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.113985 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a97ed1ff-657f-4bde-943b-78caf9d07f92-catalog-content\") pod \"certified-operators-74twj\" (UID: \"a97ed1ff-657f-4bde-943b-78caf9d07f92\") " pod="openshift-marketplace/certified-operators-74twj" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.145669 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.166368 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v68nn" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.168607 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6bjgq"] Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.169867 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6bjgq" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.213980 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6bjgq"] Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.215038 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.215100 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98a35d03-ef3b-4341-9866-56d12a28aee3-utilities\") pod \"community-operators-6bjgq\" (UID: \"98a35d03-ef3b-4341-9866-56d12a28aee3\") " pod="openshift-marketplace/community-operators-6bjgq" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.215162 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.215188 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gks4z\" (UniqueName: \"kubernetes.io/projected/98a35d03-ef3b-4341-9866-56d12a28aee3-kube-api-access-gks4z\") pod \"community-operators-6bjgq\" (UID: \"98a35d03-ef3b-4341-9866-56d12a28aee3\") " pod="openshift-marketplace/community-operators-6bjgq" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.215213 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8bfv\" (UniqueName: \"kubernetes.io/projected/a97ed1ff-657f-4bde-943b-78caf9d07f92-kube-api-access-v8bfv\") pod \"certified-operators-74twj\" (UID: \"a97ed1ff-657f-4bde-943b-78caf9d07f92\") " pod="openshift-marketplace/certified-operators-74twj" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.215252 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a97ed1ff-657f-4bde-943b-78caf9d07f92-catalog-content\") pod \"certified-operators-74twj\" (UID: \"a97ed1ff-657f-4bde-943b-78caf9d07f92\") " pod="openshift-marketplace/certified-operators-74twj" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.215277 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a97ed1ff-657f-4bde-943b-78caf9d07f92-utilities\") pod \"certified-operators-74twj\" (UID: \"a97ed1ff-657f-4bde-943b-78caf9d07f92\") " pod="openshift-marketplace/certified-operators-74twj" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.215298 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98a35d03-ef3b-4341-9866-56d12a28aee3-catalog-content\") pod \"community-operators-6bjgq\" (UID: \"98a35d03-ef3b-4341-9866-56d12a28aee3\") " pod="openshift-marketplace/community-operators-6bjgq" Jan 29 16:12:27 crc 
kubenswrapper[4714]: I0129 16:12:27.215328 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:12:27 crc kubenswrapper[4714]: E0129 16:12:27.216116 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:27.716102785 +0000 UTC m=+154.236603905 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.216563 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.216850 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a97ed1ff-657f-4bde-943b-78caf9d07f92-catalog-content\") pod \"certified-operators-74twj\" (UID: \"a97ed1ff-657f-4bde-943b-78caf9d07f92\") " pod="openshift-marketplace/certified-operators-74twj" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.216918 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a97ed1ff-657f-4bde-943b-78caf9d07f92-utilities\") pod \"certified-operators-74twj\" (UID: \"a97ed1ff-657f-4bde-943b-78caf9d07f92\") " pod="openshift-marketplace/certified-operators-74twj" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.264895 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.294916 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8bfv\" (UniqueName: \"kubernetes.io/projected/a97ed1ff-657f-4bde-943b-78caf9d07f92-kube-api-access-v8bfv\") pod \"certified-operators-74twj\" (UID: \"a97ed1ff-657f-4bde-943b-78caf9d07f92\") " pod="openshift-marketplace/certified-operators-74twj" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.316443 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:27 crc kubenswrapper[4714]: 
I0129 16:12:27.316806 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gks4z\" (UniqueName: \"kubernetes.io/projected/98a35d03-ef3b-4341-9866-56d12a28aee3-kube-api-access-gks4z\") pod \"community-operators-6bjgq\" (UID: \"98a35d03-ef3b-4341-9866-56d12a28aee3\") " pod="openshift-marketplace/community-operators-6bjgq" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.316911 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98a35d03-ef3b-4341-9866-56d12a28aee3-catalog-content\") pod \"community-operators-6bjgq\" (UID: \"98a35d03-ef3b-4341-9866-56d12a28aee3\") " pod="openshift-marketplace/community-operators-6bjgq" Jan 29 16:12:27 crc kubenswrapper[4714]: E0129 16:12:27.317101 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:27.817062102 +0000 UTC m=+154.337563222 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.317245 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.317363 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98a35d03-ef3b-4341-9866-56d12a28aee3-utilities\") pod \"community-operators-6bjgq\" (UID: \"98a35d03-ef3b-4341-9866-56d12a28aee3\") " pod="openshift-marketplace/community-operators-6bjgq" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.318876 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98a35d03-ef3b-4341-9866-56d12a28aee3-catalog-content\") pod \"community-operators-6bjgq\" (UID: \"98a35d03-ef3b-4341-9866-56d12a28aee3\") " pod="openshift-marketplace/community-operators-6bjgq" Jan 29 16:12:27 crc kubenswrapper[4714]: E0129 16:12:27.319323 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:27.819304071 +0000 UTC m=+154.339805281 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.319518 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98a35d03-ef3b-4341-9866-56d12a28aee3-utilities\") pod \"community-operators-6bjgq\" (UID: \"98a35d03-ef3b-4341-9866-56d12a28aee3\") " pod="openshift-marketplace/community-operators-6bjgq" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.339421 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-74twj" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.364249 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dbvgp"] Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.364611 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gks4z\" (UniqueName: \"kubernetes.io/projected/98a35d03-ef3b-4341-9866-56d12a28aee3-kube-api-access-gks4z\") pod \"community-operators-6bjgq\" (UID: \"98a35d03-ef3b-4341-9866-56d12a28aee3\") " pod="openshift-marketplace/community-operators-6bjgq" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.365450 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dbvgp" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.377193 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.401547 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dbvgp"] Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.421048 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.421214 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef-utilities\") pod \"certified-operators-dbvgp\" (UID: \"ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef\") " pod="openshift-marketplace/certified-operators-dbvgp" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.421234 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef-catalog-content\") pod \"certified-operators-dbvgp\" (UID: \"ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef\") " pod="openshift-marketplace/certified-operators-dbvgp" Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.421318 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dv2v\" (UniqueName: \"kubernetes.io/projected/ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef-kube-api-access-9dv2v\") pod \"certified-operators-dbvgp\" (UID: \"ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef\") " pod="openshift-marketplace/certified-operators-dbvgp" Jan 29 16:12:27 crc kubenswrapper[4714]: E0129 16:12:27.421443 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:27.921428523 +0000 UTC m=+154.441929643 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.461054 4714 generic.go:334] "Generic (PLEG): container finished" podID="fee5819c-8349-4080-9922-453f31a300da" containerID="35db7973ed160da110c671870acde50bc58a4cf360b02511f01c25b5e19a9de9" exitCode=0
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.461513 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fee5819c-8349-4080-9922-453f31a300da","Type":"ContainerDied","Data":"35db7973ed160da110c671870acde50bc58a4cf360b02511f01c25b5e19a9de9"}
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.522438 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dv2v\" (UniqueName: \"kubernetes.io/projected/ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef-kube-api-access-9dv2v\") pod \"certified-operators-dbvgp\" (UID: \"ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef\") " pod="openshift-marketplace/certified-operators-dbvgp"
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.522554 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef-utilities\") pod \"certified-operators-dbvgp\" (UID: \"ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef\") " pod="openshift-marketplace/certified-operators-dbvgp"
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.522578 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef-catalog-content\") pod \"certified-operators-dbvgp\" (UID: \"ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef\") " pod="openshift-marketplace/certified-operators-dbvgp"
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.522623 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm"
Jan 29 16:12:27 crc kubenswrapper[4714]: E0129 16:12:27.523575 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:28.023560066 +0000 UTC m=+154.544061186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.523589 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef-catalog-content\") pod \"certified-operators-dbvgp\" (UID: \"ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef\") " pod="openshift-marketplace/certified-operators-dbvgp"
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.523980 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef-utilities\") pod \"certified-operators-dbvgp\" (UID: \"ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef\") " pod="openshift-marketplace/certified-operators-dbvgp"
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.526903 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6bjgq"
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.550445 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dv2v\" (UniqueName: \"kubernetes.io/projected/ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef-kube-api-access-9dv2v\") pod \"certified-operators-dbvgp\" (UID: \"ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef\") " pod="openshift-marketplace/certified-operators-dbvgp"
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.623432 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:12:27 crc kubenswrapper[4714]: E0129 16:12:27.623855 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:28.1237673 +0000 UTC m=+154.644268420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.700226 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dbvgp"
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.710063 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xtr82"]
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.724408 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm"
Jan 29 16:12:27 crc kubenswrapper[4714]: E0129 16:12:27.724797 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:28.224785489 +0000 UTC m=+154.745286609 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.748109 4714 patch_prober.go:28] interesting pod/router-default-5444994796-lz6mw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:12:27 crc kubenswrapper[4714]: [-]has-synced failed: reason withheld
Jan 29 16:12:27 crc kubenswrapper[4714]: [+]process-running ok
Jan 29 16:12:27 crc kubenswrapper[4714]: healthz check failed
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.748154 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lz6mw" podUID="7a1dfb55-8680-4cbe-bd78-caca2e847caf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:12:27 crc kubenswrapper[4714]: W0129 16:12:27.777486 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11a30de8_b234_47b4_8fd0_44f0c428be78.slice/crio-d1e11cf94d1ae7d280d25746da20bca8871b9f9c8323efe87d1cfb324504d7a1 WatchSource:0}: Error finding container d1e11cf94d1ae7d280d25746da20bca8871b9f9c8323efe87d1cfb324504d7a1: Status 404 returned error can't find the container with id d1e11cf94d1ae7d280d25746da20bca8871b9f9c8323efe87d1cfb324504d7a1
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.791672 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dwsm5"
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.827493 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:12:27 crc kubenswrapper[4714]: E0129 16:12:27.828106 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:28.328078748 +0000 UTC m=+154.848579868 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.853350 4714 patch_prober.go:28] interesting pod/machine-config-daemon-ppngk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.853394 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.876203 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-74twj"]
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.901193 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5mkf8"
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.928752 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb979c55-3027-4d92-94b9-cd17c32e6331-config-volume\") pod \"cb979c55-3027-4d92-94b9-cd17c32e6331\" (UID: \"cb979c55-3027-4d92-94b9-cd17c32e6331\") "
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.928832 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fzjx\" (UniqueName: \"kubernetes.io/projected/cb979c55-3027-4d92-94b9-cd17c32e6331-kube-api-access-7fzjx\") pod \"cb979c55-3027-4d92-94b9-cd17c32e6331\" (UID: \"cb979c55-3027-4d92-94b9-cd17c32e6331\") "
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.928969 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb979c55-3027-4d92-94b9-cd17c32e6331-secret-volume\") pod \"cb979c55-3027-4d92-94b9-cd17c32e6331\" (UID: \"cb979c55-3027-4d92-94b9-cd17c32e6331\") "
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.929112 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm"
Jan 29 16:12:27 crc kubenswrapper[4714]: E0129 16:12:27.929437 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:28.429426136 +0000 UTC m=+154.949927256 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.929597 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb979c55-3027-4d92-94b9-cd17c32e6331-config-volume" (OuterVolumeSpecName: "config-volume") pod "cb979c55-3027-4d92-94b9-cd17c32e6331" (UID: "cb979c55-3027-4d92-94b9-cd17c32e6331"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.940080 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb979c55-3027-4d92-94b9-cd17c32e6331-kube-api-access-7fzjx" (OuterVolumeSpecName: "kube-api-access-7fzjx") pod "cb979c55-3027-4d92-94b9-cd17c32e6331" (UID: "cb979c55-3027-4d92-94b9-cd17c32e6331"). InnerVolumeSpecName "kube-api-access-7fzjx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:12:27 crc kubenswrapper[4714]: I0129 16:12:27.946248 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb979c55-3027-4d92-94b9-cd17c32e6331-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cb979c55-3027-4d92-94b9-cd17c32e6331" (UID: "cb979c55-3027-4d92-94b9-cd17c32e6331"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.030219 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.030786 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:12:28 crc kubenswrapper[4714]: E0129 16:12:28.030980 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:28.53093029 +0000 UTC m=+155.051431420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.031152 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm"
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.031253 4714 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb979c55-3027-4d92-94b9-cd17c32e6331-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.031265 4714 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb979c55-3027-4d92-94b9-cd17c32e6331-config-volume\") on node \"crc\" DevicePath \"\""
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.031276 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fzjx\" (UniqueName: \"kubernetes.io/projected/cb979c55-3027-4d92-94b9-cd17c32e6331-kube-api-access-7fzjx\") on node \"crc\" DevicePath \"\""
Jan 29 16:12:28 crc kubenswrapper[4714]: E0129 16:12:28.031563 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:28.531547649 +0000 UTC m=+155.052048769 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.138398 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:12:28 crc kubenswrapper[4714]: E0129 16:12:28.138729 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:28.638715396 +0000 UTC m=+155.159216516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.240038 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm"
Jan 29 16:12:28 crc kubenswrapper[4714]: E0129 16:12:28.240338 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:28.740326923 +0000 UTC m=+155.260828033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.276186 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dbvgp"]
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.338536 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6bjgq"]
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.341062 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:12:28 crc kubenswrapper[4714]: E0129 16:12:28.341430 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:28.841414444 +0000 UTC m=+155.361915564 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.442295 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm"
Jan 29 16:12:28 crc kubenswrapper[4714]: E0129 16:12:28.442695 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:28.94267872 +0000 UTC m=+155.463179840 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.466345 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6bjgq" event={"ID":"98a35d03-ef3b-4341-9866-56d12a28aee3","Type":"ContainerStarted","Data":"d38f58d434dcb4497833c894d88b3ceb4be10c7a4d69f2a5403bda7aa069a88c"}
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.472144 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74twj" event={"ID":"a97ed1ff-657f-4bde-943b-78caf9d07f92","Type":"ContainerStarted","Data":"27e514e7925336355503e562c2b866089bbb8f20f6235853c55635bfeebcfe8c"}
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.473473 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbvgp" event={"ID":"ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef","Type":"ContainerStarted","Data":"8245ba1ef35303ce5087b4ec9f0268e726a450c3c8f0f72042d1655209fffe8b"}
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.474926 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtr82" event={"ID":"11a30de8-b234-47b4-8fd0-44f0c428be78","Type":"ContainerStarted","Data":"d1e11cf94d1ae7d280d25746da20bca8871b9f9c8323efe87d1cfb324504d7a1"}
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.475899 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58","Type":"ContainerStarted","Data":"787dde46e1e979113da67d8494483beb2242c954a937435fdd6e95f1ea828572"}
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.477700 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5mkf8"
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.478196 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5mkf8" event={"ID":"cb979c55-3027-4d92-94b9-cd17c32e6331","Type":"ContainerDied","Data":"64dd0aa019d2ff78b021ae7336dcb6b797f2fc222a74d3ec8b1b6722bf60513e"}
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.478214 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64dd0aa019d2ff78b021ae7336dcb6b797f2fc222a74d3ec8b1b6722bf60513e"
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.542990 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:12:28 crc kubenswrapper[4714]: E0129 16:12:28.543227 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:29.043200794 +0000 UTC m=+155.563701914 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.543280 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm"
Jan 29 16:12:28 crc kubenswrapper[4714]: E0129 16:12:28.543582 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:29.043569315 +0000 UTC m=+155.564070435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.644889 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:12:28 crc kubenswrapper[4714]: E0129 16:12:28.645079 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:29.145054518 +0000 UTC m=+155.665555638 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.645115 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm"
Jan 29 16:12:28 crc kubenswrapper[4714]: E0129 16:12:28.645441 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:29.14542858 +0000 UTC m=+155.665929690 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.745770 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:12:28 crc kubenswrapper[4714]: E0129 16:12:28.745993 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:29.245964724 +0000 UTC m=+155.766465844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.746114 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm"
Jan 29 16:12:28 crc kubenswrapper[4714]: E0129 16:12:28.746382 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:29.246371406 +0000 UTC m=+155.766872526 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.746781 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nssrv"]
Jan 29 16:12:28 crc kubenswrapper[4714]: E0129 16:12:28.746996 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb979c55-3027-4d92-94b9-cd17c32e6331" containerName="collect-profiles"
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.747009 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb979c55-3027-4d92-94b9-cd17c32e6331" containerName="collect-profiles"
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.747103 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb979c55-3027-4d92-94b9-cd17c32e6331" containerName="collect-profiles"
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.747749 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nssrv"
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.749815 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.759635 4714 patch_prober.go:28] interesting pod/router-default-5444994796-lz6mw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:12:28 crc kubenswrapper[4714]: [-]has-synced failed: reason withheld
Jan 29 16:12:28 crc kubenswrapper[4714]: [+]process-running ok
Jan 29 16:12:28 crc kubenswrapper[4714]: healthz check failed
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.759674 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lz6mw" podUID="7a1dfb55-8680-4cbe-bd78-caca2e847caf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.766669 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nssrv"]
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.817513 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.847624 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:12:28 crc kubenswrapper[4714]: E0129 16:12:28.847843 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:29.347811008 +0000 UTC m=+155.868312138 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.848042 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm"
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.848156 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae853ba-61c9-439b-9dc9-21567075f18a-utilities\") pod \"redhat-marketplace-nssrv\" (UID: \"eae853ba-61c9-439b-9dc9-21567075f18a\") " pod="openshift-marketplace/redhat-marketplace-nssrv"
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.848193 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae853ba-61c9-439b-9dc9-21567075f18a-catalog-content\") pod \"redhat-marketplace-nssrv\" (UID: \"eae853ba-61c9-439b-9dc9-21567075f18a\") " pod="openshift-marketplace/redhat-marketplace-nssrv"
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.848269 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntklh\" (UniqueName: \"kubernetes.io/projected/eae853ba-61c9-439b-9dc9-21567075f18a-kube-api-access-ntklh\") pod \"redhat-marketplace-nssrv\" (UID: \"eae853ba-61c9-439b-9dc9-21567075f18a\") " pod="openshift-marketplace/redhat-marketplace-nssrv"
Jan 29 16:12:28 crc kubenswrapper[4714]: E0129 16:12:28.848568 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:29.348557171 +0000 UTC m=+155.869058291 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.949192 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fee5819c-8349-4080-9922-453f31a300da-kubelet-dir\") pod \"fee5819c-8349-4080-9922-453f31a300da\" (UID: \"fee5819c-8349-4080-9922-453f31a300da\") "
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.949333 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fee5819c-8349-4080-9922-453f31a300da-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fee5819c-8349-4080-9922-453f31a300da" (UID: "fee5819c-8349-4080-9922-453f31a300da"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.949374 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.949416 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fee5819c-8349-4080-9922-453f31a300da-kube-api-access\") pod \"fee5819c-8349-4080-9922-453f31a300da\" (UID: \"fee5819c-8349-4080-9922-453f31a300da\") "
Jan 29 16:12:28 crc kubenswrapper[4714]: E0129 16:12:28.949554 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:29.449521108 +0000 UTC m=+155.970022228 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.949632 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm"
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.949694 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae853ba-61c9-439b-9dc9-21567075f18a-utilities\") pod \"redhat-marketplace-nssrv\" (UID: \"eae853ba-61c9-439b-9dc9-21567075f18a\") " pod="openshift-marketplace/redhat-marketplace-nssrv"
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.949730 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae853ba-61c9-439b-9dc9-21567075f18a-catalog-content\") pod \"redhat-marketplace-nssrv\" (UID: \"eae853ba-61c9-439b-9dc9-21567075f18a\") " pod="openshift-marketplace/redhat-marketplace-nssrv"
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.949779 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntklh\" (UniqueName: \"kubernetes.io/projected/eae853ba-61c9-439b-9dc9-21567075f18a-kube-api-access-ntklh\") pod \"redhat-marketplace-nssrv\" (UID: \"eae853ba-61c9-439b-9dc9-21567075f18a\") " pod="openshift-marketplace/redhat-marketplace-nssrv"
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.949839 4714 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fee5819c-8349-4080-9922-453f31a300da-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 29 16:12:28 crc kubenswrapper[4714]: E0129 16:12:28.949991 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:29.449982512 +0000 UTC m=+155.970483632 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.950588 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae853ba-61c9-439b-9dc9-21567075f18a-catalog-content\") pod \"redhat-marketplace-nssrv\" (UID: \"eae853ba-61c9-439b-9dc9-21567075f18a\") " pod="openshift-marketplace/redhat-marketplace-nssrv"
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.950811 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae853ba-61c9-439b-9dc9-21567075f18a-utilities\") pod \"redhat-marketplace-nssrv\" (UID: \"eae853ba-61c9-439b-9dc9-21567075f18a\") " pod="openshift-marketplace/redhat-marketplace-nssrv"
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.954877 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fee5819c-8349-4080-9922-453f31a300da-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fee5819c-8349-4080-9922-453f31a300da" (UID: "fee5819c-8349-4080-9922-453f31a300da"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:12:28 crc kubenswrapper[4714]: I0129 16:12:28.966415 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntklh\" (UniqueName: \"kubernetes.io/projected/eae853ba-61c9-439b-9dc9-21567075f18a-kube-api-access-ntklh\") pod \"redhat-marketplace-nssrv\" (UID: \"eae853ba-61c9-439b-9dc9-21567075f18a\") " pod="openshift-marketplace/redhat-marketplace-nssrv"
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.051092 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:12:29 crc kubenswrapper[4714]: E0129 16:12:29.051242 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:29.551222008 +0000 UTC m=+156.071723138 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.051425 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm"
Jan 29 16:12:29 crc kubenswrapper[4714]: E0129 16:12:29.051768 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:29.551758174 +0000 UTC m=+156.072259304 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.051966 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fee5819c-8349-4080-9922-453f31a300da-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.062891 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nssrv"
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.136294 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hbcnj"]
Jan 29 16:12:29 crc kubenswrapper[4714]: E0129 16:12:29.136519 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee5819c-8349-4080-9922-453f31a300da" containerName="pruner"
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.136530 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee5819c-8349-4080-9922-453f31a300da" containerName="pruner"
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.136639 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="fee5819c-8349-4080-9922-453f31a300da" containerName="pruner"
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.137475 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hbcnj"
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.152868 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:12:29 crc kubenswrapper[4714]: E0129 16:12:29.153163 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:29.653136834 +0000 UTC m=+156.173637964 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.153562 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm"
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.153635 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/213a402c-b327-4aa6-9690-22d6da8664a4-utilities\") pod \"redhat-marketplace-hbcnj\" (UID: \"213a402c-b327-4aa6-9690-22d6da8664a4\") " pod="openshift-marketplace/redhat-marketplace-hbcnj"
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.153682 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pkm4\" (UniqueName: \"kubernetes.io/projected/213a402c-b327-4aa6-9690-22d6da8664a4-kube-api-access-8pkm4\") pod \"redhat-marketplace-hbcnj\" (UID: \"213a402c-b327-4aa6-9690-22d6da8664a4\") " pod="openshift-marketplace/redhat-marketplace-hbcnj"
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.153708 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/213a402c-b327-4aa6-9690-22d6da8664a4-catalog-content\") pod \"redhat-marketplace-hbcnj\" (UID: \"213a402c-b327-4aa6-9690-22d6da8664a4\") " pod="openshift-marketplace/redhat-marketplace-hbcnj"
Jan 29 16:12:29 crc kubenswrapper[4714]: E0129 16:12:29.154129 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:29.654107144 +0000 UTC m=+156.174608264 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.169917 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hbcnj"]
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.255693 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:12:29 crc kubenswrapper[4714]: E0129 16:12:29.255874 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:29.755843424 +0000 UTC m=+156.276344544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.255991 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/213a402c-b327-4aa6-9690-22d6da8664a4-utilities\") pod \"redhat-marketplace-hbcnj\" (UID: \"213a402c-b327-4aa6-9690-22d6da8664a4\") " pod="openshift-marketplace/redhat-marketplace-hbcnj"
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.256044 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pkm4\" (UniqueName: \"kubernetes.io/projected/213a402c-b327-4aa6-9690-22d6da8664a4-kube-api-access-8pkm4\") pod \"redhat-marketplace-hbcnj\" (UID: \"213a402c-b327-4aa6-9690-22d6da8664a4\") " pod="openshift-marketplace/redhat-marketplace-hbcnj"
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.256072 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/213a402c-b327-4aa6-9690-22d6da8664a4-catalog-content\") pod \"redhat-marketplace-hbcnj\" (UID: \"213a402c-b327-4aa6-9690-22d6da8664a4\") " pod="openshift-marketplace/redhat-marketplace-hbcnj"
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.256125 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm"
Jan 29 16:12:29 crc kubenswrapper[4714]: E0129 16:12:29.256541 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:29.756525125 +0000 UTC m=+156.277026245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.256559 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/213a402c-b327-4aa6-9690-22d6da8664a4-utilities\") pod \"redhat-marketplace-hbcnj\" (UID: \"213a402c-b327-4aa6-9690-22d6da8664a4\") " pod="openshift-marketplace/redhat-marketplace-hbcnj"
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.256576 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/213a402c-b327-4aa6-9690-22d6da8664a4-catalog-content\") pod \"redhat-marketplace-hbcnj\" (UID: \"213a402c-b327-4aa6-9690-22d6da8664a4\") " pod="openshift-marketplace/redhat-marketplace-hbcnj"
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.273953 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pkm4\" (UniqueName: \"kubernetes.io/projected/213a402c-b327-4aa6-9690-22d6da8664a4-kube-api-access-8pkm4\") pod \"redhat-marketplace-hbcnj\" (UID: \"213a402c-b327-4aa6-9690-22d6da8664a4\") " pod="openshift-marketplace/redhat-marketplace-hbcnj"
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.290595 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nssrv"]
Jan 29 16:12:29 crc kubenswrapper[4714]: W0129 16:12:29.303756 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeae853ba_61c9_439b_9dc9_21567075f18a.slice/crio-11eca2d99e975c8d4c6d498c418a6ed86174580092ad733d4cf31d057f9d974e WatchSource:0}: Error finding container 11eca2d99e975c8d4c6d498c418a6ed86174580092ad733d4cf31d057f9d974e: Status 404 returned error can't find the container with id 11eca2d99e975c8d4c6d498c418a6ed86174580092ad733d4cf31d057f9d974e
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.357989 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:12:29 crc kubenswrapper[4714]: E0129 16:12:29.358196 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:29.858171403 +0000 UTC m=+156.378672523 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.358421 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm"
Jan 29 16:12:29 crc kubenswrapper[4714]: E0129 16:12:29.358757 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:29.858742981 +0000 UTC m=+156.379244101 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.459037 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:12:29 crc kubenswrapper[4714]: E0129 16:12:29.459184 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:29.959167091 +0000 UTC m=+156.479668211 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.459331 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm"
Jan 29 16:12:29 crc kubenswrapper[4714]: E0129 16:12:29.459580 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:29.959572134 +0000 UTC m=+156.480073254 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.463000 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hbcnj"
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.483384 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nssrv" event={"ID":"eae853ba-61c9-439b-9dc9-21567075f18a","Type":"ContainerStarted","Data":"11eca2d99e975c8d4c6d498c418a6ed86174580092ad733d4cf31d057f9d974e"}
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.484875 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fee5819c-8349-4080-9922-453f31a300da","Type":"ContainerDied","Data":"aabce909ed74e646558bee9fa02988139c96b4993b0d41ddfcd7351dba0ff624"}
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.484922 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aabce909ed74e646558bee9fa02988139c96b4993b0d41ddfcd7351dba0ff624"
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.484969 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.559977 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:12:29 crc kubenswrapper[4714]: E0129 16:12:29.560184 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:30.060152699 +0000 UTC m=+156.580653829 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.560605 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm"
Jan 29 16:12:29 crc kubenswrapper[4714]: E0129 16:12:29.560978 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:30.060967304 +0000 UTC m=+156.581468444 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.661500 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:12:29 crc kubenswrapper[4714]: E0129 16:12:29.661679 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:30.161651533 +0000 UTC m=+156.682152663 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.661867 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:29 crc kubenswrapper[4714]: E0129 16:12:29.662233 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:30.16221971 +0000 UTC m=+156.682720830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.671079 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hbcnj"] Jan 29 16:12:29 crc kubenswrapper[4714]: W0129 16:12:29.679455 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod213a402c_b327_4aa6_9690_22d6da8664a4.slice/crio-cc206117744230d0505236f4ba4b88035d78daaa5da318d54c519b3dd8b10d4e WatchSource:0}: Error finding container cc206117744230d0505236f4ba4b88035d78daaa5da318d54c519b3dd8b10d4e: Status 404 returned error can't find the container with id cc206117744230d0505236f4ba4b88035d78daaa5da318d54c519b3dd8b10d4e Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.743677 4714 patch_prober.go:28] interesting pod/router-default-5444994796-lz6mw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:12:29 crc kubenswrapper[4714]: [-]has-synced failed: reason withheld Jan 29 16:12:29 crc kubenswrapper[4714]: [+]process-running ok Jan 29 16:12:29 crc kubenswrapper[4714]: healthz check failed Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.743744 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lz6mw" podUID="7a1dfb55-8680-4cbe-bd78-caca2e847caf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.762748 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:29 crc kubenswrapper[4714]: E0129 16:12:29.762915 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:30.262894539 +0000 UTC m=+156.783395669 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.763272 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:29 crc kubenswrapper[4714]: E0129 16:12:29.763640 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:30.263628461 +0000 UTC m=+156.784129591 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.864769 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:29 crc kubenswrapper[4714]: E0129 16:12:29.864915 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:30.364892736 +0000 UTC m=+156.885393856 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.865258 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:29 crc kubenswrapper[4714]: E0129 16:12:29.865560 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:30.365553007 +0000 UTC m=+156.886054127 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.938102 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lb68h"] Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.945379 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lb68h" Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.951193 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.963899 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lb68h"] Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.973970 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.974230 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d05e7c79-7d66-4453-aedb-f240784ff294-utilities\") pod \"redhat-operators-lb68h\" (UID: \"d05e7c79-7d66-4453-aedb-f240784ff294\") " pod="openshift-marketplace/redhat-operators-lb68h" Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.974276 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4jjj\" (UniqueName: \"kubernetes.io/projected/d05e7c79-7d66-4453-aedb-f240784ff294-kube-api-access-m4jjj\") pod \"redhat-operators-lb68h\" (UID: \"d05e7c79-7d66-4453-aedb-f240784ff294\") " pod="openshift-marketplace/redhat-operators-lb68h" Jan 29 16:12:29 crc kubenswrapper[4714]: I0129 16:12:29.974297 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d05e7c79-7d66-4453-aedb-f240784ff294-catalog-content\") pod \"redhat-operators-lb68h\" (UID: \"d05e7c79-7d66-4453-aedb-f240784ff294\") " pod="openshift-marketplace/redhat-operators-lb68h" Jan 29 16:12:29 crc kubenswrapper[4714]: E0129 16:12:29.974439 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:30.474424066 +0000 UTC m=+156.994925186 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.076512 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.076603 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d05e7c79-7d66-4453-aedb-f240784ff294-utilities\") pod \"redhat-operators-lb68h\" (UID: \"d05e7c79-7d66-4453-aedb-f240784ff294\") " pod="openshift-marketplace/redhat-operators-lb68h" Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.076692 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4jjj\" (UniqueName: \"kubernetes.io/projected/d05e7c79-7d66-4453-aedb-f240784ff294-kube-api-access-m4jjj\") pod \"redhat-operators-lb68h\" (UID: \"d05e7c79-7d66-4453-aedb-f240784ff294\") " pod="openshift-marketplace/redhat-operators-lb68h" Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.076730 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d05e7c79-7d66-4453-aedb-f240784ff294-catalog-content\") pod \"redhat-operators-lb68h\" (UID: \"d05e7c79-7d66-4453-aedb-f240784ff294\") " pod="openshift-marketplace/redhat-operators-lb68h" Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.077283 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d05e7c79-7d66-4453-aedb-f240784ff294-catalog-content\") pod \"redhat-operators-lb68h\" (UID: \"d05e7c79-7d66-4453-aedb-f240784ff294\") " pod="openshift-marketplace/redhat-operators-lb68h" Jan 29 16:12:30 crc kubenswrapper[4714]: E0129 16:12:30.077368 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:30.577348913 +0000 UTC m=+157.097850043 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.077639 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d05e7c79-7d66-4453-aedb-f240784ff294-utilities\") pod \"redhat-operators-lb68h\" (UID: \"d05e7c79-7d66-4453-aedb-f240784ff294\") " pod="openshift-marketplace/redhat-operators-lb68h" Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.096443 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4jjj\" (UniqueName: \"kubernetes.io/projected/d05e7c79-7d66-4453-aedb-f240784ff294-kube-api-access-m4jjj\") pod \"redhat-operators-lb68h\" (UID: \"d05e7c79-7d66-4453-aedb-f240784ff294\") " pod="openshift-marketplace/redhat-operators-lb68h" Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.177722 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:30 crc kubenswrapper[4714]: E0129 16:12:30.177898 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:30.677870236 +0000 UTC m=+157.198371366 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.178189 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:30 crc kubenswrapper[4714]: E0129 16:12:30.178582 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:30.678567398 +0000 UTC m=+157.199068518 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.262510 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lb68h" Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.279641 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:30 crc kubenswrapper[4714]: E0129 16:12:30.279833 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:30.779794223 +0000 UTC m=+157.300295353 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.280281 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:30 crc kubenswrapper[4714]: E0129 16:12:30.280681 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:30.78067068 +0000 UTC m=+157.301171810 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.349067 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kg9qt"] Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.359220 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kg9qt" Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.362016 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kg9qt"] Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.382587 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:30 crc kubenswrapper[4714]: E0129 16:12:30.382836 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:30.882808593 +0000 UTC m=+157.403309723 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.383207 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpzw5\" (UniqueName: \"kubernetes.io/projected/ec0eba7e-2ea0-432f-bc57-d87404801abe-kube-api-access-xpzw5\") pod \"redhat-operators-kg9qt\" (UID: \"ec0eba7e-2ea0-432f-bc57-d87404801abe\") " pod="openshift-marketplace/redhat-operators-kg9qt" Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.383254 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec0eba7e-2ea0-432f-bc57-d87404801abe-utilities\") pod \"redhat-operators-kg9qt\" (UID: \"ec0eba7e-2ea0-432f-bc57-d87404801abe\") " pod="openshift-marketplace/redhat-operators-kg9qt" Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.383294 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.383326 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec0eba7e-2ea0-432f-bc57-d87404801abe-catalog-content\") pod \"redhat-operators-kg9qt\" (UID: \"ec0eba7e-2ea0-432f-bc57-d87404801abe\") " pod="openshift-marketplace/redhat-operators-kg9qt" Jan 29 16:12:30 crc kubenswrapper[4714]: E0129 16:12:30.383662 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:30.883648118 +0000 UTC m=+157.404149238 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.478970 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lb68h"] Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.484550 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:30 crc kubenswrapper[4714]: E0129 16:12:30.484731 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:30.984685718 +0000 UTC m=+157.505186838 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.484806 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec0eba7e-2ea0-432f-bc57-d87404801abe-catalog-content\") pod \"redhat-operators-kg9qt\" (UID: \"ec0eba7e-2ea0-432f-bc57-d87404801abe\") " pod="openshift-marketplace/redhat-operators-kg9qt" Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.484928 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpzw5\" (UniqueName: \"kubernetes.io/projected/ec0eba7e-2ea0-432f-bc57-d87404801abe-kube-api-access-xpzw5\") pod \"redhat-operators-kg9qt\" (UID: \"ec0eba7e-2ea0-432f-bc57-d87404801abe\") " pod="openshift-marketplace/redhat-operators-kg9qt" Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.484979 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec0eba7e-2ea0-432f-bc57-d87404801abe-utilities\") pod \"redhat-operators-kg9qt\" (UID: \"ec0eba7e-2ea0-432f-bc57-d87404801abe\") " pod="openshift-marketplace/redhat-operators-kg9qt" Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.485018 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:30 crc kubenswrapper[4714]: E0129 
16:12:30.485335 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:30.985320927 +0000 UTC m=+157.505822047 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.485846 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec0eba7e-2ea0-432f-bc57-d87404801abe-catalog-content\") pod \"redhat-operators-kg9qt\" (UID: \"ec0eba7e-2ea0-432f-bc57-d87404801abe\") " pod="openshift-marketplace/redhat-operators-kg9qt" Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.486443 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec0eba7e-2ea0-432f-bc57-d87404801abe-utilities\") pod \"redhat-operators-kg9qt\" (UID: \"ec0eba7e-2ea0-432f-bc57-d87404801abe\") " pod="openshift-marketplace/redhat-operators-kg9qt" Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.500722 4714 generic.go:334] "Generic (PLEG): container finished" podID="213a402c-b327-4aa6-9690-22d6da8664a4" containerID="53e99ef17ef7a89695643694f24aab5f9e6445925a8b86fd6d021c08cb6c082f" exitCode=0 Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.500842 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hbcnj" event={"ID":"213a402c-b327-4aa6-9690-22d6da8664a4","Type":"ContainerDied","Data":"53e99ef17ef7a89695643694f24aab5f9e6445925a8b86fd6d021c08cb6c082f"} Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.501054 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hbcnj" event={"ID":"213a402c-b327-4aa6-9690-22d6da8664a4","Type":"ContainerStarted","Data":"cc206117744230d0505236f4ba4b88035d78daaa5da318d54c519b3dd8b10d4e"} Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.502674 4714 generic.go:334] "Generic (PLEG): container finished" podID="98a35d03-ef3b-4341-9866-56d12a28aee3" containerID="7d5da21ddc846f7484a829e4e1b7d4f27ddad6196e5ccbce162fd0a2651869dd" exitCode=0 Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.502751 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6bjgq" event={"ID":"98a35d03-ef3b-4341-9866-56d12a28aee3","Type":"ContainerDied","Data":"7d5da21ddc846f7484a829e4e1b7d4f27ddad6196e5ccbce162fd0a2651869dd"} Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.505235 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpzw5\" (UniqueName: \"kubernetes.io/projected/ec0eba7e-2ea0-432f-bc57-d87404801abe-kube-api-access-xpzw5\") pod \"redhat-operators-kg9qt\" (UID: \"ec0eba7e-2ea0-432f-bc57-d87404801abe\") " pod="openshift-marketplace/redhat-operators-kg9qt" Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.508234 4714 generic.go:334] "Generic (PLEG): container finished" 
podID="a97ed1ff-657f-4bde-943b-78caf9d07f92" containerID="bebaa193fd909649d996ac5ecb12f75e7aa251dd1a4b1b911882734c87e4b046" exitCode=0 Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.508295 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74twj" event={"ID":"a97ed1ff-657f-4bde-943b-78caf9d07f92","Type":"ContainerDied","Data":"bebaa193fd909649d996ac5ecb12f75e7aa251dd1a4b1b911882734c87e4b046"} Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.509730 4714 generic.go:334] "Generic (PLEG): container finished" podID="ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef" containerID="bf193fd80bff7bed1bec1edfb59432d0f18ec27217fda44032cd6a47058aee41" exitCode=0 Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.509813 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbvgp" event={"ID":"ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef","Type":"ContainerDied","Data":"bf193fd80bff7bed1bec1edfb59432d0f18ec27217fda44032cd6a47058aee41"} Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.510441 4714 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.514471 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb68h" event={"ID":"d05e7c79-7d66-4453-aedb-f240784ff294","Type":"ContainerStarted","Data":"5e22f2e727671a2879c86dcb9146aebbe76ddedf77fd5e705c834b21cf8bd941"} Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.521966 4714 generic.go:334] "Generic (PLEG): container finished" podID="eae853ba-61c9-439b-9dc9-21567075f18a" containerID="4ddfcc2a03d228c12293d91791425de7e03e1ce7bf286f993aef049f853f58fe" exitCode=0 Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.522109 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nssrv" event={"ID":"eae853ba-61c9-439b-9dc9-21567075f18a","Type":"ContainerDied","Data":"4ddfcc2a03d228c12293d91791425de7e03e1ce7bf286f993aef049f853f58fe"} Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.526192 4714 generic.go:334] "Generic (PLEG): container finished" podID="11a30de8-b234-47b4-8fd0-44f0c428be78" containerID="70b956b273a4676b7f7a5e461f43c09f1067f834a7d754d57cae60e15152821d" exitCode=0 Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.526279 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtr82" event={"ID":"11a30de8-b234-47b4-8fd0-44f0c428be78","Type":"ContainerDied","Data":"70b956b273a4676b7f7a5e461f43c09f1067f834a7d754d57cae60e15152821d"} Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.531506 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58","Type":"ContainerStarted","Data":"25d6585e3011dd0c35de85182524e231475d07c340c6eadfb866c1349154b360"} Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.578420 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.578398993 podStartE2EDuration="3.578398993s" podCreationTimestamp="2026-01-29 16:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:30.576825395 +0000 UTC m=+157.097326535" watchObservedRunningTime="2026-01-29 16:12:30.578398993 +0000 UTC 
m=+157.098900113" Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.586991 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:30 crc kubenswrapper[4714]: E0129 16:12:30.587190 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:31.087153121 +0000 UTC m=+157.607654251 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.587455 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:30 crc kubenswrapper[4714]: E0129 16:12:30.590925 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:31.090905946 +0000 UTC m=+157.611407076 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:30 crc kubenswrapper[4714]: E0129 16:12:30.688823 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:31.188784839 +0000 UTC m=+157.709285999 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.688615 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.689639 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:30 crc kubenswrapper[4714]: E0129 16:12:30.690601 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:31.190579314 +0000 UTC m=+157.711080474 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.691390 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.699045 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kgl5s" Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.705378 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kg9qt" Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.746623 4714 patch_prober.go:28] interesting pod/router-default-5444994796-lz6mw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:12:30 crc kubenswrapper[4714]: [-]has-synced failed: reason withheld Jan 29 16:12:30 crc kubenswrapper[4714]: [+]process-running ok Jan 29 16:12:30 crc kubenswrapper[4714]: healthz check failed Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.746692 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lz6mw" podUID="7a1dfb55-8680-4cbe-bd78-caca2e847caf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.790961 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:30 crc kubenswrapper[4714]: E0129 16:12:30.792310 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:31.292290174 +0000 UTC m=+157.812791294 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.893408 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:30 crc kubenswrapper[4714]: E0129 16:12:30.893871 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:31.393849629 +0000 UTC m=+157.914350829 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.994812 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:30 crc kubenswrapper[4714]: E0129 16:12:30.995096 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:31.495071954 +0000 UTC m=+158.015573074 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:30 crc kubenswrapper[4714]: I0129 16:12:30.995583 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:30 crc kubenswrapper[4714]: E0129 16:12:30.996002 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:31.495986692 +0000 UTC m=+158.016487812 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.016333 4714 patch_prober.go:28] interesting pod/apiserver-76f77b778f-6jl75 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 29 16:12:31 crc kubenswrapper[4714]: [+]log ok Jan 29 16:12:31 crc kubenswrapper[4714]: [+]etcd ok Jan 29 16:12:31 crc kubenswrapper[4714]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 29 16:12:31 crc kubenswrapper[4714]: [+]poststarthook/generic-apiserver-start-informers ok Jan 29 16:12:31 crc kubenswrapper[4714]: [+]poststarthook/max-in-flight-filter ok Jan 29 16:12:31 crc kubenswrapper[4714]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 29 16:12:31 crc kubenswrapper[4714]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 29 16:12:31 crc kubenswrapper[4714]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 29 16:12:31 crc kubenswrapper[4714]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 29 16:12:31 crc kubenswrapper[4714]: [+]poststarthook/project.openshift.io-projectcache ok Jan 29 16:12:31 crc kubenswrapper[4714]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 29 16:12:31 crc kubenswrapper[4714]: [+]poststarthook/openshift.io-startinformers ok Jan 29 16:12:31 crc kubenswrapper[4714]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 29 16:12:31 crc kubenswrapper[4714]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 29 16:12:31 crc kubenswrapper[4714]: livez check failed Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.016400 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-6jl75" podUID="99bab267-639b-48b1-abc4-8c0373200a39" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.019700 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kg9qt"] Jan 29 16:12:31 crc kubenswrapper[4714]: W0129 16:12:31.033494 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec0eba7e_2ea0_432f_bc57_d87404801abe.slice/crio-0b72081696a87dc8bb1d6a9fb5d18f85ed353808c0ac8086b9bf62a3458b3736 WatchSource:0}: Error finding container 0b72081696a87dc8bb1d6a9fb5d18f85ed353808c0ac8086b9bf62a3458b3736: Status 404 returned error can't find the container with id 0b72081696a87dc8bb1d6a9fb5d18f85ed353808c0ac8086b9bf62a3458b3736 Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.097273 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:31 crc 
kubenswrapper[4714]: E0129 16:12:31.097672 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:31.597653591 +0000 UTC m=+158.118154711 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.200006 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:31 crc kubenswrapper[4714]: E0129 16:12:31.200479 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:31.700459364 +0000 UTC m=+158.220960554 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.301028 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:31 crc kubenswrapper[4714]: E0129 16:12:31.301268 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:31.801233265 +0000 UTC m=+158.321734385 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.301349 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:31 crc kubenswrapper[4714]: E0129 16:12:31.301886 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:31.801874985 +0000 UTC m=+158.322376105 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.403634 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:31 crc kubenswrapper[4714]: E0129 16:12:31.403830 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:31.903801612 +0000 UTC m=+158.424302742 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.403902 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:31 crc kubenswrapper[4714]: E0129 16:12:31.404237 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:31.904227355 +0000 UTC m=+158.424728485 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.505311 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:31 crc kubenswrapper[4714]: E0129 16:12:31.505532 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:32.005486051 +0000 UTC m=+158.525987171 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.505952 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:31 crc kubenswrapper[4714]: E0129 16:12:31.506261 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:32.006253584 +0000 UTC m=+158.526754704 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.537737 4714 generic.go:334] "Generic (PLEG): container finished" podID="01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58" containerID="25d6585e3011dd0c35de85182524e231475d07c340c6eadfb866c1349154b360" exitCode=0 Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.537840 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58","Type":"ContainerDied","Data":"25d6585e3011dd0c35de85182524e231475d07c340c6eadfb866c1349154b360"} Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.539044 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kg9qt" event={"ID":"ec0eba7e-2ea0-432f-bc57-d87404801abe","Type":"ContainerStarted","Data":"0b72081696a87dc8bb1d6a9fb5d18f85ed353808c0ac8086b9bf62a3458b3736"} Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.542129 4714 generic.go:334] "Generic (PLEG): container finished" podID="d05e7c79-7d66-4453-aedb-f240784ff294" containerID="0e066cf92f43693eba9898b6ef36d8d3eb21b4fa7c877b99db7e3e39dddda134" exitCode=0 Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.542179 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb68h" event={"ID":"d05e7c79-7d66-4453-aedb-f240784ff294","Type":"ContainerDied","Data":"0e066cf92f43693eba9898b6ef36d8d3eb21b4fa7c877b99db7e3e39dddda134"} Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.545310 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4nghl" event={"ID":"714cef39-2960-4a25-ac81-a4e65a115eb3","Type":"ContainerStarted","Data":"c06e7561b8f72b2f2fba280eae8d647a4fbaf5eabdf2bc315957b0e8c936e176"} Jan 29 16:12:31 crc kubenswrapper[4714]: 
I0129 16:12:31.607608 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:31 crc kubenswrapper[4714]: E0129 16:12:31.607781 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:32.107750568 +0000 UTC m=+158.628251688 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.608274 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:31 crc kubenswrapper[4714]: E0129 16:12:31.610009 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:32.109993686 +0000 UTC m=+158.630494806 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.709858 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:31 crc kubenswrapper[4714]: E0129 16:12:31.710004 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:32.209975734 +0000 UTC m=+158.730476864 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.710138 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:31 crc kubenswrapper[4714]: E0129 16:12:31.710435 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:12:32.210425937 +0000 UTC m=+158.730927147 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gnjmm" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.729322 4714 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.740198 4714 patch_prober.go:28] interesting pod/router-default-5444994796-lz6mw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:12:31 crc kubenswrapper[4714]: [-]has-synced failed: reason withheld Jan 29 16:12:31 crc kubenswrapper[4714]: [+]process-running ok Jan 29 16:12:31 crc kubenswrapper[4714]: healthz check failed Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.740258 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lz6mw" podUID="7a1dfb55-8680-4cbe-bd78-caca2e847caf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.811463 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:31 crc kubenswrapper[4714]: E0129 16:12:31.812004 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 16:12:32.311985353 +0000 UTC m=+158.832486473 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.899950 4714 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-29T16:12:31.729348236Z","Handler":null,"Name":""} Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.902302 4714 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.902331 4714 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.913484 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.918543 4714 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
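[Annotation] The records above show the CSI registration race resolving: every MountVolume/UnmountVolume attempt for pvc-657094db fails with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" and is re-queued on a 500ms backoff, until plugin_watcher picks up the registration socket under /var/lib/kubelet/plugins_registry/, the driver is validated and registered at /var/lib/kubelet/plugins/csi-hostpath/csi.sock, and the next MountDevice pass succeeds. It is then skipped outright because the driver does not advertise the STAGE_UNSTAGE_VOLUME capability. Below is a minimal sketch, assuming the CSI spec's Go bindings (github.com/container-storage-interface/spec/lib/go/csi), of the capability probe that drives that "Skipping MountDevice..." decision; the socket path is taken from the log, everything else is illustrative and not kubelet's actual code.

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	csi "github.com/container-storage-interface/spec/lib/go/csi"
    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    )

    func main() {
    	// Endpoint from the log line "Register new plugin ... at endpoint:".
    	conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi-hostpath/csi.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatalf("dial CSI socket: %v", err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// Ask the node plugin which optional RPCs it supports.
    	resp, err := csi.NewNodeClient(conn).NodeGetCapabilities(ctx, &csi.NodeGetCapabilitiesRequest{})
    	if err != nil {
    		log.Fatalf("NodeGetCapabilities: %v", err)
    	}

    	// If STAGE_UNSTAGE_VOLUME is not advertised, the NodeStageVolume step
    	// (kubelet's "MountDevice") can be skipped, which is exactly what the
    	// record above logs for kubevirt.io.hostpath-provisioner.
    	stage := false
    	for _, c := range resp.GetCapabilities() {
    		if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
    			stage = true
    		}
    	}
    	fmt.Println("STAGE_UNSTAGE_VOLUME advertised:", stage)
    }

The same registration gate explains why both the unmount for the old pod (8f668bae) and the mount for image-registry-697d97f7c8-gnjmm flip from hard failures to success within one reconciler pass once the socket appears: the retries were never about the volume itself, only about the missing driver client.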
Jan 29 16:12:31 crc kubenswrapper[4714]: I0129 16:12:31.918604 4714 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:32 crc kubenswrapper[4714]: I0129 16:12:32.000537 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gnjmm\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:32 crc kubenswrapper[4714]: I0129 16:12:32.014609 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:32 crc kubenswrapper[4714]: I0129 16:12:32.021613 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:32 crc kubenswrapper[4714]: I0129 16:12:32.084037 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 16:12:32 crc kubenswrapper[4714]: I0129 16:12:32.199338 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 29 16:12:32 crc kubenswrapper[4714]: I0129 16:12:32.200548 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-jcdhl" Jan 29 16:12:32 crc kubenswrapper[4714]: I0129 16:12:32.261130 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gnjmm"] Jan 29 16:12:32 crc kubenswrapper[4714]: I0129 16:12:32.525396 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:32 crc kubenswrapper[4714]: I0129 16:12:32.550474 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" event={"ID":"48be8ad8-4c02-4bea-a143-449763b39d54","Type":"ContainerStarted","Data":"815b16152db25222f3f6a5ff40233d8cdbe464e73d20d130a327746193531954"} Jan 29 16:12:32 crc kubenswrapper[4714]: I0129 16:12:32.551679 4714 generic.go:334] "Generic (PLEG): container finished" podID="ec0eba7e-2ea0-432f-bc57-d87404801abe" containerID="52ca09e52af15343909984359236a933ffe1f46ab7ada48929cc2741cd10782f" exitCode=0 Jan 29 16:12:32 crc kubenswrapper[4714]: I0129 16:12:32.551752 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kg9qt" event={"ID":"ec0eba7e-2ea0-432f-bc57-d87404801abe","Type":"ContainerDied","Data":"52ca09e52af15343909984359236a933ffe1f46ab7ada48929cc2741cd10782f"} Jan 29 16:12:32 crc kubenswrapper[4714]: I0129 16:12:32.743362 4714 patch_prober.go:28] interesting pod/router-default-5444994796-lz6mw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:12:32 crc kubenswrapper[4714]: [-]has-synced failed: reason withheld Jan 29 16:12:32 crc kubenswrapper[4714]: [+]process-running ok Jan 29 16:12:32 crc kubenswrapper[4714]: healthz check failed Jan 29 16:12:32 crc kubenswrapper[4714]: I0129 16:12:32.743666 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lz6mw" podUID="7a1dfb55-8680-4cbe-bd78-caca2e847caf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:12:32 crc kubenswrapper[4714]: I0129 16:12:32.765552 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:12:32 crc kubenswrapper[4714]: I0129 16:12:32.825176 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58-kube-api-access\") pod \"01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58\" (UID: \"01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58\") " Jan 29 16:12:32 crc kubenswrapper[4714]: I0129 16:12:32.825221 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58-kubelet-dir\") pod \"01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58\" (UID: \"01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58\") " Jan 29 16:12:32 crc kubenswrapper[4714]: I0129 16:12:32.825505 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58" (UID: "01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:12:32 crc kubenswrapper[4714]: I0129 16:12:32.833203 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58" (UID: "01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:32 crc kubenswrapper[4714]: I0129 16:12:32.927407 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:32 crc kubenswrapper[4714]: I0129 16:12:32.927462 4714 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:33 crc kubenswrapper[4714]: I0129 16:12:33.561286 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58","Type":"ContainerDied","Data":"787dde46e1e979113da67d8494483beb2242c954a937435fdd6e95f1ea828572"} Jan 29 16:12:33 crc kubenswrapper[4714]: I0129 16:12:33.561331 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="787dde46e1e979113da67d8494483beb2242c954a937435fdd6e95f1ea828572" Jan 29 16:12:33 crc kubenswrapper[4714]: I0129 16:12:33.561414 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:12:33 crc kubenswrapper[4714]: I0129 16:12:33.566871 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4nghl" event={"ID":"714cef39-2960-4a25-ac81-a4e65a115eb3","Type":"ContainerStarted","Data":"6c7aed45b1136fa062f1742097a1e69e9fdfa62a7f2bceaffd7f34de9a0f9750"} Jan 29 16:12:33 crc kubenswrapper[4714]: I0129 16:12:33.741974 4714 patch_prober.go:28] interesting pod/router-default-5444994796-lz6mw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:12:33 crc kubenswrapper[4714]: [-]has-synced failed: reason withheld Jan 29 16:12:33 crc kubenswrapper[4714]: [+]process-running ok Jan 29 16:12:33 crc kubenswrapper[4714]: healthz check failed Jan 29 16:12:33 crc kubenswrapper[4714]: I0129 16:12:33.742200 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lz6mw" podUID="7a1dfb55-8680-4cbe-bd78-caca2e847caf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:12:34 crc kubenswrapper[4714]: I0129 16:12:34.575956 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" event={"ID":"48be8ad8-4c02-4bea-a143-449763b39d54","Type":"ContainerStarted","Data":"f2810d7fe0cd9711b0f9c4dd754cc2bb22760a85d91d92553df7afe7b8378eac"} Jan 29 16:12:34 crc kubenswrapper[4714]: I0129 16:12:34.576456 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:34 crc kubenswrapper[4714]: I0129 16:12:34.604987 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" podStartSLOduration=140.604964143 podStartE2EDuration="2m20.604964143s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:34.601585699 +0000 UTC m=+161.122086829" watchObservedRunningTime="2026-01-29 16:12:34.604964143 +0000 UTC m=+161.125465273" Jan 29 16:12:34 crc kubenswrapper[4714]: I0129 16:12:34.741171 4714 patch_prober.go:28] interesting pod/router-default-5444994796-lz6mw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:12:34 crc kubenswrapper[4714]: [-]has-synced failed: reason withheld Jan 29 16:12:34 crc kubenswrapper[4714]: [+]process-running ok Jan 29 16:12:34 crc kubenswrapper[4714]: healthz check failed Jan 29 16:12:34 crc kubenswrapper[4714]: I0129 16:12:34.741522 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lz6mw" podUID="7a1dfb55-8680-4cbe-bd78-caca2e847caf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:12:35 crc kubenswrapper[4714]: I0129 16:12:35.586719 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4nghl" event={"ID":"714cef39-2960-4a25-ac81-a4e65a115eb3","Type":"ContainerStarted","Data":"b95620bf86f7dddda0d975ceb522dc33f20f0909fecf473ff8d60f7f216dd939"} Jan 29 16:12:35 crc kubenswrapper[4714]: I0129 
16:12:35.614873 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-4nghl" podStartSLOduration=22.614847912 podStartE2EDuration="22.614847912s" podCreationTimestamp="2026-01-29 16:12:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:12:35.612800819 +0000 UTC m=+162.133301949" watchObservedRunningTime="2026-01-29 16:12:35.614847912 +0000 UTC m=+162.135349042" Jan 29 16:12:35 crc kubenswrapper[4714]: I0129 16:12:35.717381 4714 patch_prober.go:28] interesting pod/console-f9d7485db-m2g9h container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 29 16:12:35 crc kubenswrapper[4714]: I0129 16:12:35.717432 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-m2g9h" podUID="0e2a789d-6a90-4d60-881e-9562cd92e0a7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 29 16:12:35 crc kubenswrapper[4714]: I0129 16:12:35.740410 4714 patch_prober.go:28] interesting pod/router-default-5444994796-lz6mw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:12:35 crc kubenswrapper[4714]: [-]has-synced failed: reason withheld Jan 29 16:12:35 crc kubenswrapper[4714]: [+]process-running ok Jan 29 16:12:35 crc kubenswrapper[4714]: healthz check failed Jan 29 16:12:35 crc kubenswrapper[4714]: I0129 16:12:35.740471 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lz6mw" podUID="7a1dfb55-8680-4cbe-bd78-caca2e847caf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:12:35 crc kubenswrapper[4714]: I0129 16:12:35.796090 4714 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn75b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 29 16:12:35 crc kubenswrapper[4714]: I0129 16:12:35.796158 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fn75b" podUID="42b66dc3-a385-4350-a943-50f062da35f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 29 16:12:35 crc kubenswrapper[4714]: I0129 16:12:35.797128 4714 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn75b container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 29 16:12:35 crc kubenswrapper[4714]: I0129 16:12:35.797272 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fn75b" podUID="42b66dc3-a385-4350-a943-50f062da35f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 29 16:12:35 crc kubenswrapper[4714]: I0129 16:12:35.998062 4714 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:36 crc kubenswrapper[4714]: I0129 16:12:36.003787 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-6jl75" Jan 29 16:12:36 crc kubenswrapper[4714]: I0129 16:12:36.740103 4714 patch_prober.go:28] interesting pod/router-default-5444994796-lz6mw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:12:36 crc kubenswrapper[4714]: [-]has-synced failed: reason withheld Jan 29 16:12:36 crc kubenswrapper[4714]: [+]process-running ok Jan 29 16:12:36 crc kubenswrapper[4714]: healthz check failed Jan 29 16:12:36 crc kubenswrapper[4714]: I0129 16:12:36.740479 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lz6mw" podUID="7a1dfb55-8680-4cbe-bd78-caca2e847caf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:12:37 crc kubenswrapper[4714]: I0129 16:12:37.739857 4714 patch_prober.go:28] interesting pod/router-default-5444994796-lz6mw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:12:37 crc kubenswrapper[4714]: [-]has-synced failed: reason withheld Jan 29 16:12:37 crc kubenswrapper[4714]: [+]process-running ok Jan 29 16:12:37 crc kubenswrapper[4714]: healthz check failed Jan 29 16:12:37 crc kubenswrapper[4714]: I0129 16:12:37.739952 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lz6mw" podUID="7a1dfb55-8680-4cbe-bd78-caca2e847caf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:12:38 crc kubenswrapper[4714]: I0129 16:12:38.103825 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs\") pod \"network-metrics-daemon-2w92b\" (UID: \"791456e8-8d95-4cdb-8fd1-d06a7586b328\") " pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:12:38 crc kubenswrapper[4714]: I0129 16:12:38.116560 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/791456e8-8d95-4cdb-8fd1-d06a7586b328-metrics-certs\") pod \"network-metrics-daemon-2w92b\" (UID: \"791456e8-8d95-4cdb-8fd1-d06a7586b328\") " pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:12:38 crc kubenswrapper[4714]: I0129 16:12:38.407222 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2w92b" Jan 29 16:12:38 crc kubenswrapper[4714]: I0129 16:12:38.719404 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2w92b"] Jan 29 16:12:38 crc kubenswrapper[4714]: I0129 16:12:38.740591 4714 patch_prober.go:28] interesting pod/router-default-5444994796-lz6mw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:12:38 crc kubenswrapper[4714]: [-]has-synced failed: reason withheld Jan 29 16:12:38 crc kubenswrapper[4714]: [+]process-running ok Jan 29 16:12:38 crc kubenswrapper[4714]: healthz check failed Jan 29 16:12:38 crc kubenswrapper[4714]: I0129 16:12:38.740649 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lz6mw" podUID="7a1dfb55-8680-4cbe-bd78-caca2e847caf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:12:39 crc kubenswrapper[4714]: I0129 16:12:39.622647 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2w92b" event={"ID":"791456e8-8d95-4cdb-8fd1-d06a7586b328","Type":"ContainerStarted","Data":"6555557490d8db02ec80c5595e01191e525ec03f524bc437c5c49a933664722a"} Jan 29 16:12:39 crc kubenswrapper[4714]: I0129 16:12:39.739730 4714 patch_prober.go:28] interesting pod/router-default-5444994796-lz6mw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:12:39 crc kubenswrapper[4714]: [-]has-synced failed: reason withheld Jan 29 16:12:39 crc kubenswrapper[4714]: [+]process-running ok Jan 29 16:12:39 crc kubenswrapper[4714]: healthz check failed Jan 29 16:12:39 crc kubenswrapper[4714]: I0129 16:12:39.739780 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lz6mw" podUID="7a1dfb55-8680-4cbe-bd78-caca2e847caf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:12:40 crc kubenswrapper[4714]: I0129 16:12:40.628079 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2w92b" event={"ID":"791456e8-8d95-4cdb-8fd1-d06a7586b328","Type":"ContainerStarted","Data":"5aab667586997dedef5a158b01344ff3e116d1ce4032091162e3572b9f0a1729"} Jan 29 16:12:40 crc kubenswrapper[4714]: I0129 16:12:40.739823 4714 patch_prober.go:28] interesting pod/router-default-5444994796-lz6mw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:12:40 crc kubenswrapper[4714]: [-]has-synced failed: reason withheld Jan 29 16:12:40 crc kubenswrapper[4714]: [+]process-running ok Jan 29 16:12:40 crc kubenswrapper[4714]: healthz check failed Jan 29 16:12:40 crc kubenswrapper[4714]: I0129 16:12:40.739893 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lz6mw" podUID="7a1dfb55-8680-4cbe-bd78-caca2e847caf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:12:41 crc kubenswrapper[4714]: I0129 16:12:41.739502 4714 patch_prober.go:28] interesting pod/router-default-5444994796-lz6mw container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:12:41 crc kubenswrapper[4714]: [-]has-synced failed: reason withheld Jan 29 16:12:41 crc kubenswrapper[4714]: [+]process-running ok Jan 29 16:12:41 crc kubenswrapper[4714]: healthz check failed Jan 29 16:12:41 crc kubenswrapper[4714]: I0129 16:12:41.739794 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lz6mw" podUID="7a1dfb55-8680-4cbe-bd78-caca2e847caf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:12:42 crc kubenswrapper[4714]: I0129 16:12:42.740870 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-lz6mw" Jan 29 16:12:42 crc kubenswrapper[4714]: I0129 16:12:42.743757 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-lz6mw" Jan 29 16:12:45 crc kubenswrapper[4714]: I0129 16:12:45.717265 4714 patch_prober.go:28] interesting pod/console-f9d7485db-m2g9h container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 29 16:12:45 crc kubenswrapper[4714]: I0129 16:12:45.717607 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-m2g9h" podUID="0e2a789d-6a90-4d60-881e-9562cd92e0a7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 29 16:12:45 crc kubenswrapper[4714]: I0129 16:12:45.796025 4714 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn75b container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 29 16:12:45 crc kubenswrapper[4714]: I0129 16:12:45.796079 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fn75b" podUID="42b66dc3-a385-4350-a943-50f062da35f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 29 16:12:45 crc kubenswrapper[4714]: I0129 16:12:45.796118 4714 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-fn75b" Jan 29 16:12:45 crc kubenswrapper[4714]: I0129 16:12:45.796288 4714 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn75b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 29 16:12:45 crc kubenswrapper[4714]: I0129 16:12:45.796340 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fn75b" podUID="42b66dc3-a385-4350-a943-50f062da35f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 29 16:12:45 crc kubenswrapper[4714]: I0129 16:12:45.796682 4714 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn75b container/download-server namespace/openshift-console: Readiness probe status=failure 
output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 29 16:12:45 crc kubenswrapper[4714]: I0129 16:12:45.796724 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fn75b" podUID="42b66dc3-a385-4350-a943-50f062da35f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 29 16:12:45 crc kubenswrapper[4714]: I0129 16:12:45.796763 4714 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"c4aaec06be7df88764d0dc745049e2e561eb871b6ccb463e86a9ef317a262a34"} pod="openshift-console/downloads-7954f5f757-fn75b" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 29 16:12:45 crc kubenswrapper[4714]: I0129 16:12:45.797739 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-fn75b" podUID="42b66dc3-a385-4350-a943-50f062da35f7" containerName="download-server" containerID="cri-o://c4aaec06be7df88764d0dc745049e2e561eb871b6ccb463e86a9ef317a262a34" gracePeriod=2 Jan 29 16:12:48 crc kubenswrapper[4714]: I0129 16:12:48.680996 4714 generic.go:334] "Generic (PLEG): container finished" podID="42b66dc3-a385-4350-a943-50f062da35f7" containerID="c4aaec06be7df88764d0dc745049e2e561eb871b6ccb463e86a9ef317a262a34" exitCode=0 Jan 29 16:12:48 crc kubenswrapper[4714]: I0129 16:12:48.681089 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fn75b" event={"ID":"42b66dc3-a385-4350-a943-50f062da35f7","Type":"ContainerDied","Data":"c4aaec06be7df88764d0dc745049e2e561eb871b6ccb463e86a9ef317a262a34"} Jan 29 16:12:52 crc kubenswrapper[4714]: I0129 16:12:52.030735 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:12:55 crc kubenswrapper[4714]: I0129 16:12:55.745431 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:55 crc kubenswrapper[4714]: I0129 16:12:55.751637 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-m2g9h" Jan 29 16:12:55 crc kubenswrapper[4714]: I0129 16:12:55.800509 4714 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn75b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 29 16:12:55 crc kubenswrapper[4714]: I0129 16:12:55.800593 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fn75b" podUID="42b66dc3-a385-4350-a943-50f062da35f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 29 16:12:56 crc kubenswrapper[4714]: I0129 16:12:56.892577 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pbrmk" Jan 29 16:12:57 crc kubenswrapper[4714]: I0129 16:12:57.844721 4714 patch_prober.go:28] interesting pod/machine-config-daemon-ppngk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:12:57 crc kubenswrapper[4714]: I0129 16:12:57.845249 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:13:03 crc kubenswrapper[4714]: I0129 16:13:03.046092 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:05 crc kubenswrapper[4714]: I0129 16:13:05.795252 4714 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn75b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 29 16:13:05 crc kubenswrapper[4714]: I0129 16:13:05.795325 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fn75b" podUID="42b66dc3-a385-4350-a943-50f062da35f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 29 16:13:06 crc kubenswrapper[4714]: I0129 16:13:06.818150 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 16:13:06 crc kubenswrapper[4714]: E0129 16:13:06.818758 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58" containerName="pruner" Jan 29 16:13:06 crc kubenswrapper[4714]: I0129 16:13:06.818774 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58" containerName="pruner" Jan 29 16:13:06 crc kubenswrapper[4714]: I0129 16:13:06.818923 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="01f3f98d-08b5-47f1-b0a9-8bdb0ad67d58" containerName="pruner" Jan 29 16:13:06 crc kubenswrapper[4714]: I0129 16:13:06.819435 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:13:06 crc kubenswrapper[4714]: I0129 16:13:06.823359 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 16:13:06 crc kubenswrapper[4714]: I0129 16:13:06.823726 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 16:13:06 crc kubenswrapper[4714]: I0129 16:13:06.832059 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 16:13:06 crc kubenswrapper[4714]: I0129 16:13:06.921185 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:13:06 crc kubenswrapper[4714]: I0129 16:13:06.921236 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:13:07 crc kubenswrapper[4714]: I0129 16:13:07.022588 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:13:07 crc kubenswrapper[4714]: I0129 16:13:07.022670 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:13:07 crc kubenswrapper[4714]: I0129 16:13:07.022764 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:13:07 crc kubenswrapper[4714]: I0129 16:13:07.057455 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:13:07 crc kubenswrapper[4714]: I0129 16:13:07.198159 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:13:08 crc kubenswrapper[4714]: E0129 16:13:08.419084 4714 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage1649454453/4\": happened during read: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:13:08 crc kubenswrapper[4714]: E0129 16:13:08.419572 4714 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gks4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-6bjgq_openshift-marketplace(98a35d03-ef3b-4341-9866-56d12a28aee3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage1649454453/4\": happened during read: context canceled" logger="UnhandledError" Jan 29 16:13:08 crc kubenswrapper[4714]: E0129 16:13:08.421724 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \\\"/var/tmp/container_images_storage1649454453/4\\\": happened during read: context canceled\"" pod="openshift-marketplace/community-operators-6bjgq" podUID="98a35d03-ef3b-4341-9866-56d12a28aee3" Jan 29 16:13:11 crc kubenswrapper[4714]: E0129 16:13:11.753036 4714 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:13:11 crc kubenswrapper[4714]: E0129 16:13:11.753227 4714 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9dv2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-dbvgp_openshift-marketplace(ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 16:13:11 crc kubenswrapper[4714]: E0129 16:13:11.754443 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-dbvgp" podUID="ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef" Jan 29 16:13:12 crc kubenswrapper[4714]: I0129 16:13:12.012866 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 16:13:12 crc kubenswrapper[4714]: I0129 16:13:12.013740 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:13:12 crc kubenswrapper[4714]: I0129 16:13:12.037478 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 16:13:12 crc kubenswrapper[4714]: I0129 16:13:12.126714 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24-kube-api-access\") pod \"installer-9-crc\" (UID: \"9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:13:12 crc kubenswrapper[4714]: I0129 16:13:12.127063 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:13:12 crc kubenswrapper[4714]: I0129 16:13:12.127252 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24-var-lock\") pod \"installer-9-crc\" (UID: \"9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:13:12 crc kubenswrapper[4714]: I0129 16:13:12.228752 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:13:12 crc kubenswrapper[4714]: I0129 16:13:12.228801 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24-var-lock\") pod \"installer-9-crc\" (UID: \"9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:13:12 crc kubenswrapper[4714]: I0129 16:13:12.228837 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24-kube-api-access\") pod \"installer-9-crc\" (UID: \"9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:13:12 crc kubenswrapper[4714]: I0129 16:13:12.228913 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24-var-lock\") pod \"installer-9-crc\" (UID: \"9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:13:12 crc kubenswrapper[4714]: I0129 16:13:12.229004 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:13:12 crc kubenswrapper[4714]: I0129 16:13:12.245984 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:13:12 crc kubenswrapper[4714]: E0129 16:13:12.328112 4714 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:13:12 crc kubenswrapper[4714]: E0129 16:13:12.328271 4714 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8bfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-74twj_openshift-marketplace(a97ed1ff-657f-4bde-943b-78caf9d07f92): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 16:13:12 crc kubenswrapper[4714]: E0129 16:13:12.329826 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-74twj" podUID="a97ed1ff-657f-4bde-943b-78caf9d07f92" Jan 29 16:13:12 crc kubenswrapper[4714]: I0129 16:13:12.345662 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:13:12 crc kubenswrapper[4714]: E0129 16:13:12.642136 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-6bjgq" podUID="98a35d03-ef3b-4341-9866-56d12a28aee3" Jan 29 16:13:12 crc kubenswrapper[4714]: E0129 16:13:12.642174 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-dbvgp" podUID="ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef" Jan 29 16:13:12 crc kubenswrapper[4714]: E0129 16:13:12.754346 4714 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:13:12 crc kubenswrapper[4714]: E0129 16:13:12.754541 4714 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m4jjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-lb68h_openshift-marketplace(d05e7c79-7d66-4453-aedb-f240784ff294): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 16:13:12 crc kubenswrapper[4714]: E0129 16:13:12.755734 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-lb68h" podUID="d05e7c79-7d66-4453-aedb-f240784ff294" Jan 29 16:13:13 crc kubenswrapper[4714]: E0129 16:13:13.494314 4714 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:94d1bfc77428a945334e81bab025286e1fb0c1323b3aa1395b0c2f8e42153686: Get \"https://registry.redhat.io/v2/redhat/community-operator-index/blobs/sha256:94d1bfc77428a945334e81bab025286e1fb0c1323b3aa1395b0c2f8e42153686\": context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:13:13 crc kubenswrapper[4714]: E0129 16:13:13.494622 4714 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zbn22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-xtr82_openshift-marketplace(11a30de8-b234-47b4-8fd0-44f0c428be78): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:94d1bfc77428a945334e81bab025286e1fb0c1323b3aa1395b0c2f8e42153686: Get \"https://registry.redhat.io/v2/redhat/community-operator-index/blobs/sha256:94d1bfc77428a945334e81bab025286e1fb0c1323b3aa1395b0c2f8e42153686\": context canceled" logger="UnhandledError" Jan 29 16:13:13 crc kubenswrapper[4714]: E0129 16:13:13.496439 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:94d1bfc77428a945334e81bab025286e1fb0c1323b3aa1395b0c2f8e42153686: Get \\\"https://registry.redhat.io/v2/redhat/community-operator-index/blobs/sha256:94d1bfc77428a945334e81bab025286e1fb0c1323b3aa1395b0c2f8e42153686\\\": context canceled\"" pod="openshift-marketplace/community-operators-xtr82" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" Jan 29 16:13:15 crc kubenswrapper[4714]: I0129 16:13:15.797170 4714 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn75b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 29 16:13:15 
crc kubenswrapper[4714]: I0129 16:13:15.797277 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fn75b" podUID="42b66dc3-a385-4350-a943-50f062da35f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 29 16:13:16 crc kubenswrapper[4714]: E0129 16:13:16.356700 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-74twj" podUID="a97ed1ff-657f-4bde-943b-78caf9d07f92" Jan 29 16:13:16 crc kubenswrapper[4714]: E0129 16:13:16.357927 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-lb68h" podUID="d05e7c79-7d66-4453-aedb-f240784ff294" Jan 29 16:13:16 crc kubenswrapper[4714]: E0129 16:13:16.357889 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xtr82" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" Jan 29 16:13:16 crc kubenswrapper[4714]: E0129 16:13:16.397169 4714 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:13:16 crc kubenswrapper[4714]: E0129 16:13:16.397334 4714 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xpzw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-kg9qt_openshift-marketplace(ec0eba7e-2ea0-432f-bc57-d87404801abe): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 16:13:16 crc kubenswrapper[4714]: E0129 16:13:16.398718 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-kg9qt" podUID="ec0eba7e-2ea0-432f-bc57-d87404801abe" Jan 29 16:13:16 crc kubenswrapper[4714]: E0129 16:13:16.430723 4714 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:13:16 crc kubenswrapper[4714]: E0129 16:13:16.430873 4714 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntklh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-nssrv_openshift-marketplace(eae853ba-61c9-439b-9dc9-21567075f18a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 16:13:16 crc kubenswrapper[4714]: E0129 16:13:16.432132 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-nssrv" podUID="eae853ba-61c9-439b-9dc9-21567075f18a" Jan 29 16:13:16 crc kubenswrapper[4714]: E0129 16:13:16.444108 4714 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:13:16 crc kubenswrapper[4714]: E0129 16:13:16.444299 4714 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pkm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-hbcnj_openshift-marketplace(213a402c-b327-4aa6-9690-22d6da8664a4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 16:13:16 crc kubenswrapper[4714]: E0129 16:13:16.445671 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-hbcnj" podUID="213a402c-b327-4aa6-9690-22d6da8664a4" Jan 29 16:13:16 crc kubenswrapper[4714]: I0129 16:13:16.838219 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 16:13:16 crc kubenswrapper[4714]: I0129 16:13:16.867528 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fn75b" event={"ID":"42b66dc3-a385-4350-a943-50f062da35f7","Type":"ContainerStarted","Data":"d3f1419598c4245050212acd4731aa1926279689462b7e83b132ac3c6307471d"} Jan 29 16:13:16 crc kubenswrapper[4714]: I0129 16:13:16.868407 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-fn75b" Jan 29 16:13:16 crc kubenswrapper[4714]: I0129 16:13:16.868472 4714 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn75b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 29 16:13:16 crc kubenswrapper[4714]: I0129 16:13:16.868496 4714 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-fn75b" podUID="42b66dc3-a385-4350-a943-50f062da35f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 29 16:13:16 crc kubenswrapper[4714]: I0129 16:13:16.872555 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24","Type":"ContainerStarted","Data":"c9979ec9683e16f0b3aadc58f285988aa9a20624d0e8ae1b706d16b4bb36c291"} Jan 29 16:13:16 crc kubenswrapper[4714]: I0129 16:13:16.877520 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2w92b" event={"ID":"791456e8-8d95-4cdb-8fd1-d06a7586b328","Type":"ContainerStarted","Data":"e44ed67e5705bbd4a9aa4ca26f7b537b9562b4aa728317770efcfa94c9759009"} Jan 29 16:13:16 crc kubenswrapper[4714]: I0129 16:13:16.910657 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 16:13:16 crc kubenswrapper[4714]: I0129 16:13:16.929476 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-2w92b" podStartSLOduration=182.929444434 podStartE2EDuration="3m2.929444434s" podCreationTimestamp="2026-01-29 16:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:13:16.929401543 +0000 UTC m=+203.449902663" watchObservedRunningTime="2026-01-29 16:13:16.929444434 +0000 UTC m=+203.449945554" Jan 29 16:13:17 crc kubenswrapper[4714]: I0129 16:13:17.886796 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb","Type":"ContainerStarted","Data":"fea6462f2633259cc5627c9d340316b80f17acfb1fb2132675b377437533a003"} Jan 29 16:13:17 crc kubenswrapper[4714]: I0129 16:13:17.887752 4714 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn75b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 29 16:13:17 crc kubenswrapper[4714]: I0129 16:13:17.887823 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fn75b" podUID="42b66dc3-a385-4350-a943-50f062da35f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 29 16:13:18 crc kubenswrapper[4714]: I0129 16:13:18.898883 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24","Type":"ContainerStarted","Data":"d7f3c5ef1cf90e64e41a56b19a6b215a0e1a7265f9a460a77da807a32640c2c8"} Jan 29 16:13:18 crc kubenswrapper[4714]: I0129 16:13:18.903872 4714 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn75b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 29 16:13:18 crc kubenswrapper[4714]: I0129 16:13:18.904015 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fn75b" podUID="42b66dc3-a385-4350-a943-50f062da35f7" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 29 16:13:18 crc kubenswrapper[4714]: I0129 16:13:18.904467 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb","Type":"ContainerStarted","Data":"f4c0a8611fe19893c08801182acc09882f18c15576bc5e70f9010e8259dff15b"} Jan 29 16:13:18 crc kubenswrapper[4714]: I0129 16:13:18.933348 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=6.933321122 podStartE2EDuration="6.933321122s" podCreationTimestamp="2026-01-29 16:13:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:13:18.926207718 +0000 UTC m=+205.446708878" watchObservedRunningTime="2026-01-29 16:13:18.933321122 +0000 UTC m=+205.453822282" Jan 29 16:13:18 crc kubenswrapper[4714]: I0129 16:13:18.959189 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=12.9591554 podStartE2EDuration="12.9591554s" podCreationTimestamp="2026-01-29 16:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:13:18.953165729 +0000 UTC m=+205.473666859" watchObservedRunningTime="2026-01-29 16:13:18.9591554 +0000 UTC m=+205.479656530" Jan 29 16:13:19 crc kubenswrapper[4714]: I0129 16:13:19.915199 4714 generic.go:334] "Generic (PLEG): container finished" podID="c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb" containerID="f4c0a8611fe19893c08801182acc09882f18c15576bc5e70f9010e8259dff15b" exitCode=0 Jan 29 16:13:19 crc kubenswrapper[4714]: I0129 16:13:19.915450 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb","Type":"ContainerDied","Data":"f4c0a8611fe19893c08801182acc09882f18c15576bc5e70f9010e8259dff15b"} Jan 29 16:13:21 crc kubenswrapper[4714]: I0129 16:13:21.202714 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:13:21 crc kubenswrapper[4714]: I0129 16:13:21.381066 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb-kube-api-access\") pod \"c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb\" (UID: \"c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb\") " Jan 29 16:13:21 crc kubenswrapper[4714]: I0129 16:13:21.382119 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb-kubelet-dir\") pod \"c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb\" (UID: \"c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb\") " Jan 29 16:13:21 crc kubenswrapper[4714]: I0129 16:13:21.382348 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb" (UID: "c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:13:21 crc kubenswrapper[4714]: I0129 16:13:21.388958 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb" (UID: "c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:13:21 crc kubenswrapper[4714]: I0129 16:13:21.483850 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:13:21 crc kubenswrapper[4714]: I0129 16:13:21.483902 4714 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:13:21 crc kubenswrapper[4714]: I0129 16:13:21.932450 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb","Type":"ContainerDied","Data":"fea6462f2633259cc5627c9d340316b80f17acfb1fb2132675b377437533a003"} Jan 29 16:13:21 crc kubenswrapper[4714]: I0129 16:13:21.932524 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fea6462f2633259cc5627c9d340316b80f17acfb1fb2132675b377437533a003" Jan 29 16:13:21 crc kubenswrapper[4714]: I0129 16:13:21.932590 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:13:25 crc kubenswrapper[4714]: I0129 16:13:25.796228 4714 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn75b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 29 16:13:25 crc kubenswrapper[4714]: I0129 16:13:25.796644 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fn75b" podUID="42b66dc3-a385-4350-a943-50f062da35f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 29 16:13:25 crc kubenswrapper[4714]: I0129 16:13:25.796265 4714 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn75b container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 29 16:13:25 crc kubenswrapper[4714]: I0129 16:13:25.796766 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fn75b" podUID="42b66dc3-a385-4350-a943-50f062da35f7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 29 16:13:27 crc kubenswrapper[4714]: I0129 16:13:27.844092 4714 patch_prober.go:28] interesting pod/machine-config-daemon-ppngk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:13:27 crc 
kubenswrapper[4714]: I0129 16:13:27.844529 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:13:27 crc kubenswrapper[4714]: I0129 16:13:27.844687 4714 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" Jan 29 16:13:27 crc kubenswrapper[4714]: I0129 16:13:27.845749 4714 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5"} pod="openshift-machine-config-operator/machine-config-daemon-ppngk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:13:27 crc kubenswrapper[4714]: I0129 16:13:27.845882 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" containerID="cri-o://27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5" gracePeriod=600 Jan 29 16:13:29 crc kubenswrapper[4714]: I0129 16:13:29.987095 4714 generic.go:334] "Generic (PLEG): container finished" podID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerID="27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5" exitCode=0 Jan 29 16:13:29 crc kubenswrapper[4714]: I0129 16:13:29.987156 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" event={"ID":"c8c765f3-89eb-4077-8829-03e86eb0c90c","Type":"ContainerDied","Data":"27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5"} Jan 29 16:13:35 crc kubenswrapper[4714]: I0129 16:13:35.029950 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" event={"ID":"c8c765f3-89eb-4077-8829-03e86eb0c90c","Type":"ContainerStarted","Data":"32d59d2a4eb095db60ac3365411265035b47b1f01d164e950c86daa5aecb2792"} Jan 29 16:13:35 crc kubenswrapper[4714]: I0129 16:13:35.038827 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbvgp" event={"ID":"ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef","Type":"ContainerDied","Data":"e4723c2a15106e7cb0b50986860a7a558bd1c3ece5b50a3175d026187fce0bd7"} Jan 29 16:13:35 crc kubenswrapper[4714]: I0129 16:13:35.031766 4714 generic.go:334] "Generic (PLEG): container finished" podID="ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef" containerID="e4723c2a15106e7cb0b50986860a7a558bd1c3ece5b50a3175d026187fce0bd7" exitCode=0 Jan 29 16:13:35 crc kubenswrapper[4714]: I0129 16:13:35.819248 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-fn75b" Jan 29 16:13:36 crc kubenswrapper[4714]: I0129 16:13:36.050655 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kg9qt" event={"ID":"ec0eba7e-2ea0-432f-bc57-d87404801abe","Type":"ContainerStarted","Data":"70619171b98977a9d286bd513ce901776ee376ec0c300ee22dba9b847da868b1"}
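
[Annotation] The machine-config-daemon records above show the liveness-failure path end to end: the probe fails, the kubelet marks the container unhealthy, logs "failed liveness probe, will be restarted", and asks cri-o to kill it with gracePeriod=600 (seconds). The real kubelet does this through CRI calls to the runtime; the following is only a simplified host-process analogue of "kill with a grace period" for illustration:

// Sketch: send SIGTERM, wait up to the grace period, then SIGKILL.
package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

func killWithGrace(pid int, grace time.Duration) error {
	proc, err := os.FindProcess(pid)
	if err != nil {
		return err
	}
	if err := proc.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	deadline := time.Now().Add(grace)
	for time.Now().Before(deadline) {
		// Signal 0 only checks whether the process still exists.
		if err := proc.Signal(syscall.Signal(0)); err != nil {
			return nil // exited within the grace period
		}
		time.Sleep(100 * time.Millisecond)
	}
	fmt.Println("grace period expired, sending SIGKILL")
	return proc.Signal(syscall.SIGKILL)
}

func main() {
	// gracePeriod=600 in the log means 600 seconds; the PID is a placeholder.
	_ = killWithGrace(12345, 600*time.Second)
}
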
podID="213a402c-b327-4aa6-9690-22d6da8664a4" containerID="30c0abcdea89377220aa9b27f7c4d0c14e091ab28605ec484a71ffefcd4d36ba" exitCode=0 Jan 29 16:13:36 crc kubenswrapper[4714]: I0129 16:13:36.052987 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hbcnj" event={"ID":"213a402c-b327-4aa6-9690-22d6da8664a4","Type":"ContainerDied","Data":"30c0abcdea89377220aa9b27f7c4d0c14e091ab28605ec484a71ffefcd4d36ba"} Jan 29 16:13:36 crc kubenswrapper[4714]: I0129 16:13:36.058728 4714 generic.go:334] "Generic (PLEG): container finished" podID="a97ed1ff-657f-4bde-943b-78caf9d07f92" containerID="b55a4a53101476467f28cefb11eb7b554ba1847628876fc42f62a75c8730a4f6" exitCode=0 Jan 29 16:13:36 crc kubenswrapper[4714]: I0129 16:13:36.058823 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74twj" event={"ID":"a97ed1ff-657f-4bde-943b-78caf9d07f92","Type":"ContainerDied","Data":"b55a4a53101476467f28cefb11eb7b554ba1847628876fc42f62a75c8730a4f6"} Jan 29 16:13:36 crc kubenswrapper[4714]: I0129 16:13:36.062480 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbvgp" event={"ID":"ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef","Type":"ContainerStarted","Data":"4d3b6ad01b47ee4415724a7f3a339855145f38f464bbba6afefebc62453be6ce"} Jan 29 16:13:36 crc kubenswrapper[4714]: I0129 16:13:36.064918 4714 generic.go:334] "Generic (PLEG): container finished" podID="d05e7c79-7d66-4453-aedb-f240784ff294" containerID="dc4ea141b6a80ae098de7ca5a17e9e2b1ec3ebf2112f640c6fad4d5fcc75a51c" exitCode=0 Jan 29 16:13:36 crc kubenswrapper[4714]: I0129 16:13:36.065028 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb68h" event={"ID":"d05e7c79-7d66-4453-aedb-f240784ff294","Type":"ContainerDied","Data":"dc4ea141b6a80ae098de7ca5a17e9e2b1ec3ebf2112f640c6fad4d5fcc75a51c"} Jan 29 16:13:36 crc kubenswrapper[4714]: I0129 16:13:36.075241 4714 generic.go:334] "Generic (PLEG): container finished" podID="eae853ba-61c9-439b-9dc9-21567075f18a" containerID="af26cf3e86e4001d9e7f83e8aa3b6ea940d03f589aa1d1907c488eb1df46568b" exitCode=0 Jan 29 16:13:36 crc kubenswrapper[4714]: I0129 16:13:36.076257 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nssrv" event={"ID":"eae853ba-61c9-439b-9dc9-21567075f18a","Type":"ContainerDied","Data":"af26cf3e86e4001d9e7f83e8aa3b6ea940d03f589aa1d1907c488eb1df46568b"} Jan 29 16:13:36 crc kubenswrapper[4714]: I0129 16:13:36.108617 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dbvgp" podStartSLOduration=4.12663362 podStartE2EDuration="1m9.108598348s" podCreationTimestamp="2026-01-29 16:12:27 +0000 UTC" firstStartedPulling="2026-01-29 16:12:30.511213659 +0000 UTC m=+157.031714789" lastFinishedPulling="2026-01-29 16:13:35.493178397 +0000 UTC m=+222.013679517" observedRunningTime="2026-01-29 16:13:36.105593762 +0000 UTC m=+222.626094882" watchObservedRunningTime="2026-01-29 16:13:36.108598348 +0000 UTC m=+222.629099458" Jan 29 16:13:37 crc kubenswrapper[4714]: I0129 16:13:37.089283 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb68h" event={"ID":"d05e7c79-7d66-4453-aedb-f240784ff294","Type":"ContainerStarted","Data":"25adf524f9c5473bfa242fd63827380ee76b9012a8eb32666464c503ff1ae5f5"} Jan 29 16:13:37 crc kubenswrapper[4714]: I0129 16:13:37.093231 4714 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nssrv" event={"ID":"eae853ba-61c9-439b-9dc9-21567075f18a","Type":"ContainerStarted","Data":"52559e4420e032272c3033326a96872ff005d794cf95b2dce2bffa130b6cf392"} Jan 29 16:13:37 crc kubenswrapper[4714]: I0129 16:13:37.095509 4714 generic.go:334] "Generic (PLEG): container finished" podID="ec0eba7e-2ea0-432f-bc57-d87404801abe" containerID="70619171b98977a9d286bd513ce901776ee376ec0c300ee22dba9b847da868b1" exitCode=0 Jan 29 16:13:37 crc kubenswrapper[4714]: I0129 16:13:37.095543 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kg9qt" event={"ID":"ec0eba7e-2ea0-432f-bc57-d87404801abe","Type":"ContainerDied","Data":"70619171b98977a9d286bd513ce901776ee376ec0c300ee22dba9b847da868b1"} Jan 29 16:13:37 crc kubenswrapper[4714]: I0129 16:13:37.095557 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kg9qt" event={"ID":"ec0eba7e-2ea0-432f-bc57-d87404801abe","Type":"ContainerStarted","Data":"280fd3af9e55869b591af38324a50ac2f821ce1422ef0def509f857f44fa4f86"} Jan 29 16:13:37 crc kubenswrapper[4714]: I0129 16:13:37.101420 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hbcnj" event={"ID":"213a402c-b327-4aa6-9690-22d6da8664a4","Type":"ContainerStarted","Data":"2c317c97caab5fe2de4c3ba52a485756258217cc8c0f8aff08364a47758a8598"} Jan 29 16:13:37 crc kubenswrapper[4714]: I0129 16:13:37.103427 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74twj" event={"ID":"a97ed1ff-657f-4bde-943b-78caf9d07f92","Type":"ContainerStarted","Data":"0725e5fe8581c9f5ab88fa8ad4af11d5f996f4972602031ec88175b5fda32f79"} Jan 29 16:13:37 crc kubenswrapper[4714]: I0129 16:13:37.111773 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lb68h" podStartSLOduration=4.00847504 podStartE2EDuration="1m8.111761981s" podCreationTimestamp="2026-01-29 16:12:29 +0000 UTC" firstStartedPulling="2026-01-29 16:12:32.553203997 +0000 UTC m=+159.073705117" lastFinishedPulling="2026-01-29 16:13:36.656490938 +0000 UTC m=+223.176992058" observedRunningTime="2026-01-29 16:13:37.110128805 +0000 UTC m=+223.630629925" watchObservedRunningTime="2026-01-29 16:13:37.111761981 +0000 UTC m=+223.632263101" Jan 29 16:13:37 crc kubenswrapper[4714]: I0129 16:13:37.128305 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kg9qt" podStartSLOduration=3.911234839 podStartE2EDuration="1m7.128284414s" podCreationTimestamp="2026-01-29 16:12:30 +0000 UTC" firstStartedPulling="2026-01-29 16:12:33.568827221 +0000 UTC m=+160.089328351" lastFinishedPulling="2026-01-29 16:13:36.785876796 +0000 UTC m=+223.306377926" observedRunningTime="2026-01-29 16:13:37.126827742 +0000 UTC m=+223.647328862" watchObservedRunningTime="2026-01-29 16:13:37.128284414 +0000 UTC m=+223.648785524" Jan 29 16:13:37 crc kubenswrapper[4714]: I0129 16:13:37.145835 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nssrv" podStartSLOduration=3.9601176909999998 podStartE2EDuration="1m9.145819895s" podCreationTimestamp="2026-01-29 16:12:28 +0000 UTC" firstStartedPulling="2026-01-29 16:12:31.553403336 +0000 UTC m=+158.073904456" lastFinishedPulling="2026-01-29 16:13:36.73910554 +0000 UTC m=+223.259606660" observedRunningTime="2026-01-29 16:13:37.14423724 
+0000 UTC m=+223.664738360" watchObservedRunningTime="2026-01-29 16:13:37.145819895 +0000 UTC m=+223.666321005" Jan 29 16:13:37 crc kubenswrapper[4714]: I0129 16:13:37.162552 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hbcnj" podStartSLOduration=3.174430813 podStartE2EDuration="1m8.162538483s" podCreationTimestamp="2026-01-29 16:12:29 +0000 UTC" firstStartedPulling="2026-01-29 16:12:31.546307129 +0000 UTC m=+158.066808249" lastFinishedPulling="2026-01-29 16:13:36.534414799 +0000 UTC m=+223.054915919" observedRunningTime="2026-01-29 16:13:37.161887854 +0000 UTC m=+223.682388974" watchObservedRunningTime="2026-01-29 16:13:37.162538483 +0000 UTC m=+223.683039603" Jan 29 16:13:37 crc kubenswrapper[4714]: I0129 16:13:37.184281 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-74twj" podStartSLOduration=5.187567452 podStartE2EDuration="1m11.184265144s" podCreationTimestamp="2026-01-29 16:12:26 +0000 UTC" firstStartedPulling="2026-01-29 16:12:30.511137197 +0000 UTC m=+157.031638337" lastFinishedPulling="2026-01-29 16:13:36.507834909 +0000 UTC m=+223.028336029" observedRunningTime="2026-01-29 16:13:37.180411814 +0000 UTC m=+223.700912954" watchObservedRunningTime="2026-01-29 16:13:37.184265144 +0000 UTC m=+223.704766254" Jan 29 16:13:37 crc kubenswrapper[4714]: I0129 16:13:37.340470 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-74twj" Jan 29 16:13:37 crc kubenswrapper[4714]: I0129 16:13:37.340532 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-74twj" Jan 29 16:13:37 crc kubenswrapper[4714]: I0129 16:13:37.701420 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dbvgp" Jan 29 16:13:37 crc kubenswrapper[4714]: I0129 16:13:37.701755 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dbvgp" Jan 29 16:13:38 crc kubenswrapper[4714]: I0129 16:13:38.522635 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-74twj" podUID="a97ed1ff-657f-4bde-943b-78caf9d07f92" containerName="registry-server" probeResult="failure" output=< Jan 29 16:13:38 crc kubenswrapper[4714]: timeout: failed to connect service ":50051" within 1s Jan 29 16:13:38 crc kubenswrapper[4714]: > Jan 29 16:13:38 crc kubenswrapper[4714]: I0129 16:13:38.737979 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-dbvgp" podUID="ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef" containerName="registry-server" probeResult="failure" output=< Jan 29 16:13:38 crc kubenswrapper[4714]: timeout: failed to connect service ":50051" within 1s Jan 29 16:13:38 crc kubenswrapper[4714]: > Jan 29 16:13:39 crc kubenswrapper[4714]: I0129 16:13:39.063470 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nssrv" Jan 29 16:13:39 crc kubenswrapper[4714]: I0129 16:13:39.063809 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nssrv" Jan 29 16:13:39 crc kubenswrapper[4714]: I0129 16:13:39.116261 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nssrv" Jan 29 
Jan 29 16:13:39 crc kubenswrapper[4714]: I0129 16:13:39.463648 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hbcnj" Jan 29 16:13:39 crc kubenswrapper[4714]: I0129 16:13:39.463700 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hbcnj" Jan 29 16:13:39 crc kubenswrapper[4714]: I0129 16:13:39.513564 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hbcnj" Jan 29 16:13:40 crc kubenswrapper[4714]: I0129 16:13:40.263314 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lb68h" Jan 29 16:13:40 crc kubenswrapper[4714]: I0129 16:13:40.264710 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lb68h" Jan 29 16:13:40 crc kubenswrapper[4714]: I0129 16:13:40.706304 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kg9qt" Jan 29 16:13:40 crc kubenswrapper[4714]: I0129 16:13:40.706863 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kg9qt" Jan 29 16:13:41 crc kubenswrapper[4714]: I0129 16:13:41.307021 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lb68h" podUID="d05e7c79-7d66-4453-aedb-f240784ff294" containerName="registry-server" probeResult="failure" output=< Jan 29 16:13:41 crc kubenswrapper[4714]: timeout: failed to connect service ":50051" within 1s Jan 29 16:13:41 crc kubenswrapper[4714]: > Jan 29 16:13:41 crc kubenswrapper[4714]: I0129 16:13:41.758184 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kg9qt" podUID="ec0eba7e-2ea0-432f-bc57-d87404801abe" containerName="registry-server" probeResult="failure" output=< Jan 29 16:13:41 crc kubenswrapper[4714]: timeout: failed to connect service ":50051" within 1s Jan 29 16:13:41 crc kubenswrapper[4714]: > Jan 29 16:13:44 crc kubenswrapper[4714]: I0129 16:13:44.151380 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtr82" event={"ID":"11a30de8-b234-47b4-8fd0-44f0c428be78","Type":"ContainerStarted","Data":"8a54a45940dea4f60292ad1cc53a3fa404d30031632cdd928c54b5c498a0d947"} Jan 29 16:13:44 crc kubenswrapper[4714]: I0129 16:13:44.156264 4714 generic.go:334] "Generic (PLEG): container finished" podID="98a35d03-ef3b-4341-9866-56d12a28aee3" containerID="aaa814f95896ac1eaedc7b86e2197c8140345f5296d9cb5ecb6a3dac9cee5ab6" exitCode=0 Jan 29 16:13:44 crc kubenswrapper[4714]: I0129 16:13:44.156312 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6bjgq" event={"ID":"98a35d03-ef3b-4341-9866-56d12a28aee3","Type":"ContainerDied","Data":"aaa814f95896ac1eaedc7b86e2197c8140345f5296d9cb5ecb6a3dac9cee5ab6"} Jan 29 16:13:45 crc kubenswrapper[4714]: I0129 16:13:45.168689 4714 generic.go:334] "Generic (PLEG): container finished" podID="11a30de8-b234-47b4-8fd0-44f0c428be78" containerID="8a54a45940dea4f60292ad1cc53a3fa404d30031632cdd928c54b5c498a0d947" exitCode=0 Jan 29 16:13:45 crc kubenswrapper[4714]: I0129 16:13:45.168845 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtr82" 
event={"ID":"11a30de8-b234-47b4-8fd0-44f0c428be78","Type":"ContainerDied","Data":"8a54a45940dea4f60292ad1cc53a3fa404d30031632cdd928c54b5c498a0d947"} Jan 29 16:13:45 crc kubenswrapper[4714]: I0129 16:13:45.175748 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6bjgq" event={"ID":"98a35d03-ef3b-4341-9866-56d12a28aee3","Type":"ContainerStarted","Data":"7d09f4f35a658993558ef15a141e74b57af657ad6538fc1eecadfe107e507595"} Jan 29 16:13:45 crc kubenswrapper[4714]: I0129 16:13:45.217267 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6bjgq" podStartSLOduration=4.146413642 podStartE2EDuration="1m18.21723727s" podCreationTimestamp="2026-01-29 16:12:27 +0000 UTC" firstStartedPulling="2026-01-29 16:12:30.509961651 +0000 UTC m=+157.030462781" lastFinishedPulling="2026-01-29 16:13:44.580785289 +0000 UTC m=+231.101286409" observedRunningTime="2026-01-29 16:13:45.214593364 +0000 UTC m=+231.735094524" watchObservedRunningTime="2026-01-29 16:13:45.21723727 +0000 UTC m=+231.737738430" Jan 29 16:13:46 crc kubenswrapper[4714]: I0129 16:13:46.194143 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtr82" event={"ID":"11a30de8-b234-47b4-8fd0-44f0c428be78","Type":"ContainerStarted","Data":"a2c143c1c72b06e6085dcb21f057a3e45d817ac3a8bf8d7e3516db54610f130e"} Jan 29 16:13:46 crc kubenswrapper[4714]: I0129 16:13:46.214691 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xtr82" podStartSLOduration=4.97714805 podStartE2EDuration="1m20.21466974s" podCreationTimestamp="2026-01-29 16:12:26 +0000 UTC" firstStartedPulling="2026-01-29 16:12:30.527830307 +0000 UTC m=+157.048331427" lastFinishedPulling="2026-01-29 16:13:45.765351997 +0000 UTC m=+232.285853117" observedRunningTime="2026-01-29 16:13:46.213429924 +0000 UTC m=+232.733931034" watchObservedRunningTime="2026-01-29 16:13:46.21466974 +0000 UTC m=+232.735170860" Jan 29 16:13:47 crc kubenswrapper[4714]: I0129 16:13:47.062528 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xtr82" Jan 29 16:13:47 crc kubenswrapper[4714]: I0129 16:13:47.062784 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xtr82" Jan 29 16:13:47 crc kubenswrapper[4714]: I0129 16:13:47.401755 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-74twj" Jan 29 16:13:47 crc kubenswrapper[4714]: I0129 16:13:47.447305 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-74twj" Jan 29 16:13:47 crc kubenswrapper[4714]: I0129 16:13:47.528302 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6bjgq" Jan 29 16:13:47 crc kubenswrapper[4714]: I0129 16:13:47.528355 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6bjgq" Jan 29 16:13:47 crc kubenswrapper[4714]: I0129 16:13:47.563418 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6bjgq" Jan 29 16:13:47 crc kubenswrapper[4714]: I0129 16:13:47.749881 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-dbvgp" Jan 29 16:13:47 crc kubenswrapper[4714]: I0129 16:13:47.805554 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dbvgp" Jan 29 16:13:48 crc kubenswrapper[4714]: I0129 16:13:48.125316 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-xtr82" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" containerName="registry-server" probeResult="failure" output=< Jan 29 16:13:48 crc kubenswrapper[4714]: timeout: failed to connect service ":50051" within 1s Jan 29 16:13:48 crc kubenswrapper[4714]: > Jan 29 16:13:49 crc kubenswrapper[4714]: I0129 16:13:49.119762 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nssrv" Jan 29 16:13:49 crc kubenswrapper[4714]: I0129 16:13:49.192597 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dbvgp"] Jan 29 16:13:49 crc kubenswrapper[4714]: I0129 16:13:49.208180 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dbvgp" podUID="ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef" containerName="registry-server" containerID="cri-o://4d3b6ad01b47ee4415724a7f3a339855145f38f464bbba6afefebc62453be6ce" gracePeriod=2 Jan 29 16:13:49 crc kubenswrapper[4714]: I0129 16:13:49.544287 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hbcnj" Jan 29 16:13:50 crc kubenswrapper[4714]: I0129 16:13:50.216574 4714 generic.go:334] "Generic (PLEG): container finished" podID="ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef" containerID="4d3b6ad01b47ee4415724a7f3a339855145f38f464bbba6afefebc62453be6ce" exitCode=0 Jan 29 16:13:50 crc kubenswrapper[4714]: I0129 16:13:50.216611 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbvgp" event={"ID":"ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef","Type":"ContainerDied","Data":"4d3b6ad01b47ee4415724a7f3a339855145f38f464bbba6afefebc62453be6ce"} Jan 29 16:13:50 crc kubenswrapper[4714]: I0129 16:13:50.325950 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lb68h" Jan 29 16:13:50 crc kubenswrapper[4714]: I0129 16:13:50.389958 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lb68h" Jan 29 16:13:50 crc kubenswrapper[4714]: I0129 16:13:50.670123 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dbvgp" Jan 29 16:13:50 crc kubenswrapper[4714]: I0129 16:13:50.703382 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dv2v\" (UniqueName: \"kubernetes.io/projected/ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef-kube-api-access-9dv2v\") pod \"ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef\" (UID: \"ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef\") " Jan 29 16:13:50 crc kubenswrapper[4714]: I0129 16:13:50.703454 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef-utilities\") pod \"ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef\" (UID: \"ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef\") " Jan 29 16:13:50 crc kubenswrapper[4714]: I0129 16:13:50.703851 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef-catalog-content\") pod \"ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef\" (UID: \"ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef\") " Jan 29 16:13:50 crc kubenswrapper[4714]: I0129 16:13:50.704573 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef-utilities" (OuterVolumeSpecName: "utilities") pod "ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef" (UID: "ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:13:50 crc kubenswrapper[4714]: I0129 16:13:50.711447 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef-kube-api-access-9dv2v" (OuterVolumeSpecName: "kube-api-access-9dv2v") pod "ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef" (UID: "ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef"). InnerVolumeSpecName "kube-api-access-9dv2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:13:50 crc kubenswrapper[4714]: I0129 16:13:50.750182 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef" (UID: "ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:13:50 crc kubenswrapper[4714]: I0129 16:13:50.758326 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kg9qt" Jan 29 16:13:50 crc kubenswrapper[4714]: I0129 16:13:50.801080 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kg9qt" Jan 29 16:13:50 crc kubenswrapper[4714]: I0129 16:13:50.805888 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dv2v\" (UniqueName: \"kubernetes.io/projected/ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef-kube-api-access-9dv2v\") on node \"crc\" DevicePath \"\"" Jan 29 16:13:50 crc kubenswrapper[4714]: I0129 16:13:50.805983 4714 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:13:50 crc kubenswrapper[4714]: I0129 16:13:50.805999 4714 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:13:51 crc kubenswrapper[4714]: I0129 16:13:51.226553 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbvgp" event={"ID":"ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef","Type":"ContainerDied","Data":"8245ba1ef35303ce5087b4ec9f0268e726a450c3c8f0f72042d1655209fffe8b"} Jan 29 16:13:51 crc kubenswrapper[4714]: I0129 16:13:51.226695 4714 scope.go:117] "RemoveContainer" containerID="4d3b6ad01b47ee4415724a7f3a339855145f38f464bbba6afefebc62453be6ce" Jan 29 16:13:51 crc kubenswrapper[4714]: I0129 16:13:51.227171 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dbvgp" Jan 29 16:13:51 crc kubenswrapper[4714]: I0129 16:13:51.249892 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dbvgp"] Jan 29 16:13:51 crc kubenswrapper[4714]: I0129 16:13:51.252325 4714 scope.go:117] "RemoveContainer" containerID="e4723c2a15106e7cb0b50986860a7a558bd1c3ece5b50a3175d026187fce0bd7" Jan 29 16:13:51 crc kubenswrapper[4714]: I0129 16:13:51.254216 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dbvgp"] Jan 29 16:13:51 crc kubenswrapper[4714]: I0129 16:13:51.281735 4714 scope.go:117] "RemoveContainer" containerID="bf193fd80bff7bed1bec1edfb59432d0f18ec27217fda44032cd6a47058aee41" Jan 29 16:13:51 crc kubenswrapper[4714]: I0129 16:13:51.585015 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hbcnj"] Jan 29 16:13:51 crc kubenswrapper[4714]: I0129 16:13:51.585229 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hbcnj" podUID="213a402c-b327-4aa6-9690-22d6da8664a4" containerName="registry-server" containerID="cri-o://2c317c97caab5fe2de4c3ba52a485756258217cc8c0f8aff08364a47758a8598" gracePeriod=2 Jan 29 16:13:52 crc kubenswrapper[4714]: I0129 16:13:52.192424 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef" path="/var/lib/kubelet/pods/ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef/volumes" Jan 29 16:13:53 crc kubenswrapper[4714]: I0129 16:13:53.239272 4714 generic.go:334] "Generic (PLEG): container finished" podID="213a402c-b327-4aa6-9690-22d6da8664a4" containerID="2c317c97caab5fe2de4c3ba52a485756258217cc8c0f8aff08364a47758a8598" exitCode=0 Jan 29 16:13:53 crc kubenswrapper[4714]: I0129 16:13:53.239321 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hbcnj" event={"ID":"213a402c-b327-4aa6-9690-22d6da8664a4","Type":"ContainerDied","Data":"2c317c97caab5fe2de4c3ba52a485756258217cc8c0f8aff08364a47758a8598"} Jan 29 16:13:53 crc kubenswrapper[4714]: I0129 16:13:53.538622 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hbcnj" Jan 29 16:13:53 crc kubenswrapper[4714]: I0129 16:13:53.638806 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/213a402c-b327-4aa6-9690-22d6da8664a4-catalog-content\") pod \"213a402c-b327-4aa6-9690-22d6da8664a4\" (UID: \"213a402c-b327-4aa6-9690-22d6da8664a4\") " Jan 29 16:13:53 crc kubenswrapper[4714]: I0129 16:13:53.638905 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/213a402c-b327-4aa6-9690-22d6da8664a4-utilities\") pod \"213a402c-b327-4aa6-9690-22d6da8664a4\" (UID: \"213a402c-b327-4aa6-9690-22d6da8664a4\") " Jan 29 16:13:53 crc kubenswrapper[4714]: I0129 16:13:53.638972 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pkm4\" (UniqueName: \"kubernetes.io/projected/213a402c-b327-4aa6-9690-22d6da8664a4-kube-api-access-8pkm4\") pod \"213a402c-b327-4aa6-9690-22d6da8664a4\" (UID: \"213a402c-b327-4aa6-9690-22d6da8664a4\") " Jan 29 16:13:53 crc kubenswrapper[4714]: I0129 16:13:53.639881 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/213a402c-b327-4aa6-9690-22d6da8664a4-utilities" (OuterVolumeSpecName: "utilities") pod "213a402c-b327-4aa6-9690-22d6da8664a4" (UID: "213a402c-b327-4aa6-9690-22d6da8664a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:13:53 crc kubenswrapper[4714]: I0129 16:13:53.643965 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/213a402c-b327-4aa6-9690-22d6da8664a4-kube-api-access-8pkm4" (OuterVolumeSpecName: "kube-api-access-8pkm4") pod "213a402c-b327-4aa6-9690-22d6da8664a4" (UID: "213a402c-b327-4aa6-9690-22d6da8664a4"). InnerVolumeSpecName "kube-api-access-8pkm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:13:53 crc kubenswrapper[4714]: I0129 16:13:53.660225 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/213a402c-b327-4aa6-9690-22d6da8664a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "213a402c-b327-4aa6-9690-22d6da8664a4" (UID: "213a402c-b327-4aa6-9690-22d6da8664a4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:13:53 crc kubenswrapper[4714]: I0129 16:13:53.741046 4714 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/213a402c-b327-4aa6-9690-22d6da8664a4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:13:53 crc kubenswrapper[4714]: I0129 16:13:53.741070 4714 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/213a402c-b327-4aa6-9690-22d6da8664a4-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:13:53 crc kubenswrapper[4714]: I0129 16:13:53.741080 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pkm4\" (UniqueName: \"kubernetes.io/projected/213a402c-b327-4aa6-9690-22d6da8664a4-kube-api-access-8pkm4\") on node \"crc\" DevicePath \"\"" Jan 29 16:13:53 crc kubenswrapper[4714]: I0129 16:13:53.783296 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kg9qt"] Jan 29 16:13:53 crc kubenswrapper[4714]: I0129 16:13:53.783524 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kg9qt" podUID="ec0eba7e-2ea0-432f-bc57-d87404801abe" containerName="registry-server" containerID="cri-o://280fd3af9e55869b591af38324a50ac2f821ce1422ef0def509f857f44fa4f86" gracePeriod=2 Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.116091 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kg9qt" Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.150621 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec0eba7e-2ea0-432f-bc57-d87404801abe-catalog-content\") pod \"ec0eba7e-2ea0-432f-bc57-d87404801abe\" (UID: \"ec0eba7e-2ea0-432f-bc57-d87404801abe\") " Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.150774 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec0eba7e-2ea0-432f-bc57-d87404801abe-utilities\") pod \"ec0eba7e-2ea0-432f-bc57-d87404801abe\" (UID: \"ec0eba7e-2ea0-432f-bc57-d87404801abe\") " Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.151034 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpzw5\" (UniqueName: \"kubernetes.io/projected/ec0eba7e-2ea0-432f-bc57-d87404801abe-kube-api-access-xpzw5\") pod \"ec0eba7e-2ea0-432f-bc57-d87404801abe\" (UID: \"ec0eba7e-2ea0-432f-bc57-d87404801abe\") " Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.152737 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec0eba7e-2ea0-432f-bc57-d87404801abe-utilities" (OuterVolumeSpecName: "utilities") pod "ec0eba7e-2ea0-432f-bc57-d87404801abe" (UID: "ec0eba7e-2ea0-432f-bc57-d87404801abe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.154415 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec0eba7e-2ea0-432f-bc57-d87404801abe-kube-api-access-xpzw5" (OuterVolumeSpecName: "kube-api-access-xpzw5") pod "ec0eba7e-2ea0-432f-bc57-d87404801abe" (UID: "ec0eba7e-2ea0-432f-bc57-d87404801abe"). InnerVolumeSpecName "kube-api-access-xpzw5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.246957 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hbcnj" event={"ID":"213a402c-b327-4aa6-9690-22d6da8664a4","Type":"ContainerDied","Data":"cc206117744230d0505236f4ba4b88035d78daaa5da318d54c519b3dd8b10d4e"} Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.247019 4714 scope.go:117] "RemoveContainer" containerID="2c317c97caab5fe2de4c3ba52a485756258217cc8c0f8aff08364a47758a8598" Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.247028 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hbcnj" Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.251027 4714 generic.go:334] "Generic (PLEG): container finished" podID="ec0eba7e-2ea0-432f-bc57-d87404801abe" containerID="280fd3af9e55869b591af38324a50ac2f821ce1422ef0def509f857f44fa4f86" exitCode=0 Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.251063 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kg9qt" event={"ID":"ec0eba7e-2ea0-432f-bc57-d87404801abe","Type":"ContainerDied","Data":"280fd3af9e55869b591af38324a50ac2f821ce1422ef0def509f857f44fa4f86"} Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.251090 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kg9qt" event={"ID":"ec0eba7e-2ea0-432f-bc57-d87404801abe","Type":"ContainerDied","Data":"0b72081696a87dc8bb1d6a9fb5d18f85ed353808c0ac8086b9bf62a3458b3736"} Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.251093 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kg9qt" Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.252907 4714 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec0eba7e-2ea0-432f-bc57-d87404801abe-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.252924 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpzw5\" (UniqueName: \"kubernetes.io/projected/ec0eba7e-2ea0-432f-bc57-d87404801abe-kube-api-access-xpzw5\") on node \"crc\" DevicePath \"\"" Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.269501 4714 scope.go:117] "RemoveContainer" containerID="30c0abcdea89377220aa9b27f7c4d0c14e091ab28605ec484a71ffefcd4d36ba" Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.273649 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hbcnj"] Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.274711 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hbcnj"] Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.283281 4714 scope.go:117] "RemoveContainer" containerID="53e99ef17ef7a89695643694f24aab5f9e6445925a8b86fd6d021c08cb6c082f" Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.285541 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec0eba7e-2ea0-432f-bc57-d87404801abe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec0eba7e-2ea0-432f-bc57-d87404801abe" (UID: "ec0eba7e-2ea0-432f-bc57-d87404801abe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.302093 4714 scope.go:117] "RemoveContainer" containerID="280fd3af9e55869b591af38324a50ac2f821ce1422ef0def509f857f44fa4f86" Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.316975 4714 scope.go:117] "RemoveContainer" containerID="70619171b98977a9d286bd513ce901776ee376ec0c300ee22dba9b847da868b1" Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.332437 4714 scope.go:117] "RemoveContainer" containerID="52ca09e52af15343909984359236a933ffe1f46ab7ada48929cc2741cd10782f" Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.351509 4714 scope.go:117] "RemoveContainer" containerID="280fd3af9e55869b591af38324a50ac2f821ce1422ef0def509f857f44fa4f86" Jan 29 16:13:54 crc kubenswrapper[4714]: E0129 16:13:54.351747 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"280fd3af9e55869b591af38324a50ac2f821ce1422ef0def509f857f44fa4f86\": container with ID starting with 280fd3af9e55869b591af38324a50ac2f821ce1422ef0def509f857f44fa4f86 not found: ID does not exist" containerID="280fd3af9e55869b591af38324a50ac2f821ce1422ef0def509f857f44fa4f86" Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.351778 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"280fd3af9e55869b591af38324a50ac2f821ce1422ef0def509f857f44fa4f86"} err="failed to get container status \"280fd3af9e55869b591af38324a50ac2f821ce1422ef0def509f857f44fa4f86\": rpc error: code = NotFound desc = could not find container \"280fd3af9e55869b591af38324a50ac2f821ce1422ef0def509f857f44fa4f86\": container with ID starting with 280fd3af9e55869b591af38324a50ac2f821ce1422ef0def509f857f44fa4f86 not found: ID does not exist" Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.351803 4714 scope.go:117] "RemoveContainer" containerID="70619171b98977a9d286bd513ce901776ee376ec0c300ee22dba9b847da868b1" Jan 29 16:13:54 crc kubenswrapper[4714]: E0129 16:13:54.352323 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70619171b98977a9d286bd513ce901776ee376ec0c300ee22dba9b847da868b1\": container with ID starting with 70619171b98977a9d286bd513ce901776ee376ec0c300ee22dba9b847da868b1 not found: ID does not exist" containerID="70619171b98977a9d286bd513ce901776ee376ec0c300ee22dba9b847da868b1" Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.352345 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70619171b98977a9d286bd513ce901776ee376ec0c300ee22dba9b847da868b1"} err="failed to get container status \"70619171b98977a9d286bd513ce901776ee376ec0c300ee22dba9b847da868b1\": rpc error: code = NotFound desc = could not find container \"70619171b98977a9d286bd513ce901776ee376ec0c300ee22dba9b847da868b1\": container with ID starting with 70619171b98977a9d286bd513ce901776ee376ec0c300ee22dba9b847da868b1 not found: ID does not exist" Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.352361 4714 scope.go:117] "RemoveContainer" containerID="52ca09e52af15343909984359236a933ffe1f46ab7ada48929cc2741cd10782f" Jan 29 16:13:54 crc kubenswrapper[4714]: E0129 16:13:54.352563 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52ca09e52af15343909984359236a933ffe1f46ab7ada48929cc2741cd10782f\": container with ID starting with 
52ca09e52af15343909984359236a933ffe1f46ab7ada48929cc2741cd10782f not found: ID does not exist" containerID="52ca09e52af15343909984359236a933ffe1f46ab7ada48929cc2741cd10782f" Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.352586 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52ca09e52af15343909984359236a933ffe1f46ab7ada48929cc2741cd10782f"} err="failed to get container status \"52ca09e52af15343909984359236a933ffe1f46ab7ada48929cc2741cd10782f\": rpc error: code = NotFound desc = could not find container \"52ca09e52af15343909984359236a933ffe1f46ab7ada48929cc2741cd10782f\": container with ID starting with 52ca09e52af15343909984359236a933ffe1f46ab7ada48929cc2741cd10782f not found: ID does not exist" Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.353654 4714 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec0eba7e-2ea0-432f-bc57-d87404801abe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.580751 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kg9qt"] Jan 29 16:13:54 crc kubenswrapper[4714]: I0129 16:13:54.586697 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kg9qt"] Jan 29 16:13:55 crc kubenswrapper[4714]: I0129 16:13:55.949005 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h8b4r"] Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.189906 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="213a402c-b327-4aa6-9690-22d6da8664a4" path="/var/lib/kubelet/pods/213a402c-b327-4aa6-9690-22d6da8664a4/volumes" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.190692 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec0eba7e-2ea0-432f-bc57-d87404801abe" path="/var/lib/kubelet/pods/ec0eba7e-2ea0-432f-bc57-d87404801abe/volumes" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.494260 4714 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 16:13:56 crc kubenswrapper[4714]: E0129 16:13:56.495144 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec0eba7e-2ea0-432f-bc57-d87404801abe" containerName="extract-content" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.495167 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec0eba7e-2ea0-432f-bc57-d87404801abe" containerName="extract-content" Jan 29 16:13:56 crc kubenswrapper[4714]: E0129 16:13:56.495191 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="213a402c-b327-4aa6-9690-22d6da8664a4" containerName="extract-content" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.495198 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="213a402c-b327-4aa6-9690-22d6da8664a4" containerName="extract-content" Jan 29 16:13:56 crc kubenswrapper[4714]: E0129 16:13:56.495213 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef" containerName="extract-content" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.495224 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef" containerName="extract-content" Jan 29 16:13:56 crc kubenswrapper[4714]: E0129 16:13:56.495237 4714 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef" containerName="extract-utilities" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.495244 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef" containerName="extract-utilities" Jan 29 16:13:56 crc kubenswrapper[4714]: E0129 16:13:56.495254 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb" containerName="pruner" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.495261 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb" containerName="pruner" Jan 29 16:13:56 crc kubenswrapper[4714]: E0129 16:13:56.495274 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec0eba7e-2ea0-432f-bc57-d87404801abe" containerName="registry-server" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.495283 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec0eba7e-2ea0-432f-bc57-d87404801abe" containerName="registry-server" Jan 29 16:13:56 crc kubenswrapper[4714]: E0129 16:13:56.495293 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef" containerName="registry-server" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.495299 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef" containerName="registry-server" Jan 29 16:13:56 crc kubenswrapper[4714]: E0129 16:13:56.495312 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="213a402c-b327-4aa6-9690-22d6da8664a4" containerName="registry-server" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.495318 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="213a402c-b327-4aa6-9690-22d6da8664a4" containerName="registry-server" Jan 29 16:13:56 crc kubenswrapper[4714]: E0129 16:13:56.495375 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="213a402c-b327-4aa6-9690-22d6da8664a4" containerName="extract-utilities" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.495382 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="213a402c-b327-4aa6-9690-22d6da8664a4" containerName="extract-utilities" Jan 29 16:13:56 crc kubenswrapper[4714]: E0129 16:13:56.495394 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec0eba7e-2ea0-432f-bc57-d87404801abe" containerName="extract-utilities" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.495400 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec0eba7e-2ea0-432f-bc57-d87404801abe" containerName="extract-utilities" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.496217 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae1263ae-fc9d-4d73-ac5d-65e9e4dacfef" containerName="registry-server" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.496239 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec0eba7e-2ea0-432f-bc57-d87404801abe" containerName="registry-server" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.496250 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="213a402c-b327-4aa6-9690-22d6da8664a4" containerName="registry-server" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.496268 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="c33e9e66-b87c-4a7a-8c7b-8fd44b4bd3eb" containerName="pruner" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.497337 4714 kubelet.go:2431] "SyncLoop 
REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.497731 4714 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.497798 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f" gracePeriod=15 Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.498047 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.498223 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79" gracePeriod=15 Jan 29 16:13:56 crc kubenswrapper[4714]: E0129 16:13:56.498346 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.498916 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 16:13:56 crc kubenswrapper[4714]: E0129 16:13:56.498948 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.498956 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 16:13:56 crc kubenswrapper[4714]: E0129 16:13:56.498988 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.499010 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:13:56 crc kubenswrapper[4714]: E0129 16:13:56.499026 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.499032 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 16:13:56 crc kubenswrapper[4714]: E0129 16:13:56.499063 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.499070 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 16:13:56 crc kubenswrapper[4714]: E0129 16:13:56.499084 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.499090 4714 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.499286 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.499300 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.499307 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.499319 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.499347 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.499355 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:13:56 crc kubenswrapper[4714]: E0129 16:13:56.499584 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.499593 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.498381 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa" gracePeriod=15 Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.498416 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2" gracePeriod=15 Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.498480 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6" gracePeriod=15 Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.503748 4714 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.589658 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.589718 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.589745 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.589760 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.589777 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.589795 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.590019 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.590655 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.692145 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.692232 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.692269 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.692296 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.692323 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.692346 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.692366 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.692446 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.692535 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.692586 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.692613 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.692639 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.692668 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.692713 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.692765 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:13:56 crc kubenswrapper[4714]: I0129 16:13:56.692802 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:13:57 crc kubenswrapper[4714]: I0129 16:13:57.112445 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xtr82" Jan 29 16:13:57 crc kubenswrapper[4714]: I0129 16:13:57.113469 4714 status_manager.go:851] "Failed to get status for pod" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" pod="openshift-marketplace/community-operators-xtr82" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xtr82\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:13:57 crc kubenswrapper[4714]: I0129 16:13:57.155127 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xtr82" Jan 29 16:13:57 crc kubenswrapper[4714]: I0129 16:13:57.155566 4714 status_manager.go:851] "Failed to get status for pod" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" pod="openshift-marketplace/community-operators-xtr82" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xtr82\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:13:57 crc kubenswrapper[4714]: I0129 16:13:57.269577 4714 generic.go:334] "Generic (PLEG): container finished" podID="9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24" containerID="d7f3c5ef1cf90e64e41a56b19a6b215a0e1a7265f9a460a77da807a32640c2c8" exitCode=0 Jan 29 16:13:57 crc kubenswrapper[4714]: I0129 16:13:57.269679 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24","Type":"ContainerDied","Data":"d7f3c5ef1cf90e64e41a56b19a6b215a0e1a7265f9a460a77da807a32640c2c8"} Jan 29 16:13:57 crc kubenswrapper[4714]: I0129 16:13:57.270363 4714 status_manager.go:851] "Failed to get status for pod" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" pod="openshift-marketplace/community-operators-xtr82" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xtr82\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:13:57 crc kubenswrapper[4714]: I0129 16:13:57.270733 4714 status_manager.go:851] "Failed to get status for pod" podUID="9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:13:57 crc kubenswrapper[4714]: I0129 16:13:57.272727 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 16:13:57 crc kubenswrapper[4714]: I0129 16:13:57.274069 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 16:13:57 crc kubenswrapper[4714]: I0129 16:13:57.275002 4714 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79" exitCode=0 Jan 29 16:13:57 crc kubenswrapper[4714]: I0129 16:13:57.275062 4714 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2" exitCode=0 Jan 29 16:13:57 crc kubenswrapper[4714]: I0129 16:13:57.275086 4714 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa" exitCode=0 Jan 29 16:13:57 crc kubenswrapper[4714]: I0129 16:13:57.275107 4714 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6" exitCode=2 Jan 29 16:13:57 crc kubenswrapper[4714]: I0129 16:13:57.275107 4714 scope.go:117] "RemoveContainer" containerID="6e458517de6b8ed3eb1e3e2b912a6d2134d1f4d5cada6affcb056fbe35defce8" Jan 29 16:13:57 crc kubenswrapper[4714]: I0129 16:13:57.569419 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6bjgq" Jan 29 16:13:57 crc kubenswrapper[4714]: I0129 16:13:57.570495 4714 status_manager.go:851] "Failed to get status for pod" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" pod="openshift-marketplace/community-operators-xtr82" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xtr82\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:13:57 crc kubenswrapper[4714]: I0129 16:13:57.570653 4714 status_manager.go:851] "Failed to get status for pod" podUID="98a35d03-ef3b-4341-9866-56d12a28aee3" pod="openshift-marketplace/community-operators-6bjgq" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6bjgq\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:13:57 crc kubenswrapper[4714]: I0129 16:13:57.570918 4714 status_manager.go:851] "Failed to get status for pod" podUID="9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.283825 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.573040 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.573523 4714 status_manager.go:851] "Failed to get status for pod" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" pod="openshift-marketplace/community-operators-xtr82" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xtr82\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.573822 4714 status_manager.go:851] "Failed to get status for pod" podUID="9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.578735 4714 status_manager.go:851] "Failed to get status for pod" podUID="98a35d03-ef3b-4341-9866-56d12a28aee3" pod="openshift-marketplace/community-operators-6bjgq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6bjgq\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.653087 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24-kube-api-access\") pod \"9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24\" (UID: \"9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24\") " Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.653526 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24-var-lock\") pod \"9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24\" (UID: \"9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24\") " Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.653573 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24-kubelet-dir\") pod \"9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24\" (UID: \"9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24\") " Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.653637 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24-var-lock" (OuterVolumeSpecName: "var-lock") pod "9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24" (UID: "9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.653745 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24" (UID: "9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.654008 4714 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.654024 4714 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.664265 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24" (UID: "9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.755872 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.853787 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.854431 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.854892 4714 status_manager.go:851] "Failed to get status for pod" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" pod="openshift-marketplace/community-operators-xtr82" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xtr82\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.855086 4714 status_manager.go:851] "Failed to get status for pod" podUID="98a35d03-ef3b-4341-9866-56d12a28aee3" pod="openshift-marketplace/community-operators-6bjgq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6bjgq\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.855359 4714 status_manager.go:851] "Failed to get status for pod" podUID="9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.855811 4714 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.957970 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.958035 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.958091 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.958163 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.958163 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.958264 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.958577 4714 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.958602 4714 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:13:58 crc kubenswrapper[4714]: I0129 16:13:58.958614 4714 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:13:59 crc kubenswrapper[4714]: I0129 16:13:59.291538 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24","Type":"ContainerDied","Data":"c9979ec9683e16f0b3aadc58f285988aa9a20624d0e8ae1b706d16b4bb36c291"} Jan 29 16:13:59 crc kubenswrapper[4714]: I0129 16:13:59.291593 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9979ec9683e16f0b3aadc58f285988aa9a20624d0e8ae1b706d16b4bb36c291" Jan 29 16:13:59 crc kubenswrapper[4714]: I0129 16:13:59.291654 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:13:59 crc kubenswrapper[4714]: I0129 16:13:59.297222 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 16:13:59 crc kubenswrapper[4714]: I0129 16:13:59.300423 4714 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f" exitCode=0 Jan 29 16:13:59 crc kubenswrapper[4714]: I0129 16:13:59.300481 4714 scope.go:117] "RemoveContainer" containerID="422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79" Jan 29 16:13:59 crc kubenswrapper[4714]: I0129 16:13:59.300600 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:13:59 crc kubenswrapper[4714]: I0129 16:13:59.307434 4714 status_manager.go:851] "Failed to get status for pod" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" pod="openshift-marketplace/community-operators-xtr82" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xtr82\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:13:59 crc kubenswrapper[4714]: I0129 16:13:59.308068 4714 status_manager.go:851] "Failed to get status for pod" podUID="98a35d03-ef3b-4341-9866-56d12a28aee3" pod="openshift-marketplace/community-operators-6bjgq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6bjgq\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:13:59 crc kubenswrapper[4714]: I0129 16:13:59.308392 4714 status_manager.go:851] "Failed to get status for pod" podUID="9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:13:59 crc kubenswrapper[4714]: I0129 16:13:59.308857 4714 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:13:59 crc kubenswrapper[4714]: I0129 16:13:59.313784 4714 status_manager.go:851] "Failed to get status for pod" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" pod="openshift-marketplace/community-operators-xtr82" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xtr82\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:13:59 crc kubenswrapper[4714]: I0129 16:13:59.314209 4714 status_manager.go:851] "Failed to get status for pod" podUID="98a35d03-ef3b-4341-9866-56d12a28aee3" pod="openshift-marketplace/community-operators-6bjgq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6bjgq\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:13:59 crc kubenswrapper[4714]: I0129 16:13:59.314480 4714 status_manager.go:851] "Failed to get status for pod" podUID="9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:13:59 crc kubenswrapper[4714]: I0129 16:13:59.314824 4714 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:13:59 crc kubenswrapper[4714]: I0129 16:13:59.315535 4714 scope.go:117] "RemoveContainer" containerID="5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2" Jan 29 16:13:59 crc kubenswrapper[4714]: I0129 16:13:59.325283 4714 scope.go:117] "RemoveContainer" 
containerID="b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa" Jan 29 16:13:59 crc kubenswrapper[4714]: I0129 16:13:59.336027 4714 scope.go:117] "RemoveContainer" containerID="4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6" Jan 29 16:13:59 crc kubenswrapper[4714]: I0129 16:13:59.351165 4714 scope.go:117] "RemoveContainer" containerID="a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f" Jan 29 16:13:59 crc kubenswrapper[4714]: I0129 16:13:59.365491 4714 scope.go:117] "RemoveContainer" containerID="3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c" Jan 29 16:14:00 crc kubenswrapper[4714]: I0129 16:14:00.190907 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 29 16:14:00 crc kubenswrapper[4714]: I0129 16:14:00.359925 4714 scope.go:117] "RemoveContainer" containerID="422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79" Jan 29 16:14:00 crc kubenswrapper[4714]: E0129 16:14:00.360621 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\": container with ID starting with 422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79 not found: ID does not exist" containerID="422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79" Jan 29 16:14:00 crc kubenswrapper[4714]: I0129 16:14:00.360666 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79"} err="failed to get container status \"422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\": rpc error: code = NotFound desc = could not find container \"422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79\": container with ID starting with 422e671f983882856957a72a899cd1e37d99ed4214cfd94bbf5b94933859ae79 not found: ID does not exist" Jan 29 16:14:00 crc kubenswrapper[4714]: I0129 16:14:00.360696 4714 scope.go:117] "RemoveContainer" containerID="5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2" Jan 29 16:14:00 crc kubenswrapper[4714]: E0129 16:14:00.361096 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\": container with ID starting with 5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2 not found: ID does not exist" containerID="5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2" Jan 29 16:14:00 crc kubenswrapper[4714]: I0129 16:14:00.361131 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2"} err="failed to get container status \"5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\": rpc error: code = NotFound desc = could not find container \"5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2\": container with ID starting with 5054d435e2cfce3893996ab2bd3746f95318408942e3239d3a184da25d540ba2 not found: ID does not exist" Jan 29 16:14:00 crc kubenswrapper[4714]: I0129 16:14:00.361158 4714 scope.go:117] "RemoveContainer" containerID="b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa" Jan 29 16:14:00 crc kubenswrapper[4714]: 
E0129 16:14:00.361486 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\": container with ID starting with b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa not found: ID does not exist" containerID="b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa" Jan 29 16:14:00 crc kubenswrapper[4714]: I0129 16:14:00.361526 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa"} err="failed to get container status \"b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\": rpc error: code = NotFound desc = could not find container \"b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa\": container with ID starting with b4b8556c6768e3aa3dba661bc9e6bcf3b159139eada076ff3419e54cbe0ca1fa not found: ID does not exist" Jan 29 16:14:00 crc kubenswrapper[4714]: I0129 16:14:00.361549 4714 scope.go:117] "RemoveContainer" containerID="4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6" Jan 29 16:14:00 crc kubenswrapper[4714]: E0129 16:14:00.361869 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\": container with ID starting with 4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6 not found: ID does not exist" containerID="4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6" Jan 29 16:14:00 crc kubenswrapper[4714]: I0129 16:14:00.361918 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6"} err="failed to get container status \"4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\": rpc error: code = NotFound desc = could not find container \"4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6\": container with ID starting with 4ab2f0e3ab0fcab15e30d6b29bb14662bcbb0c9357e4df00c15e43e3970193c6 not found: ID does not exist" Jan 29 16:14:00 crc kubenswrapper[4714]: I0129 16:14:00.361994 4714 scope.go:117] "RemoveContainer" containerID="a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f" Jan 29 16:14:00 crc kubenswrapper[4714]: E0129 16:14:00.364786 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\": container with ID starting with a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f not found: ID does not exist" containerID="a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f" Jan 29 16:14:00 crc kubenswrapper[4714]: I0129 16:14:00.364828 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f"} err="failed to get container status \"a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\": rpc error: code = NotFound desc = could not find container \"a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f\": container with ID starting with a754474ea4ff3c800aa715e042720449b861d7a7911e34ec627aa9a94807f30f not found: ID does not exist" Jan 29 16:14:00 crc kubenswrapper[4714]: I0129 16:14:00.364843 
4714 scope.go:117] "RemoveContainer" containerID="3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c" Jan 29 16:14:00 crc kubenswrapper[4714]: E0129 16:14:00.365294 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\": container with ID starting with 3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c not found: ID does not exist" containerID="3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c" Jan 29 16:14:00 crc kubenswrapper[4714]: I0129 16:14:00.365329 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c"} err="failed to get container status \"3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\": rpc error: code = NotFound desc = could not find container \"3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c\": container with ID starting with 3af6d31edf201152cb07bcdbe09550306376ea701a5ab9d7755a552218c57c8c not found: ID does not exist" Jan 29 16:14:01 crc kubenswrapper[4714]: E0129 16:14:01.538677 4714 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.46:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:14:01 crc kubenswrapper[4714]: I0129 16:14:01.539173 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:14:01 crc kubenswrapper[4714]: W0129 16:14:01.558496 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-c2827e545ce1c2813ef6994a00eaf13c969a902919f03f1d776c0660aff8ea0d WatchSource:0}: Error finding container c2827e545ce1c2813ef6994a00eaf13c969a902919f03f1d776c0660aff8ea0d: Status 404 returned error can't find the container with id c2827e545ce1c2813ef6994a00eaf13c969a902919f03f1d776c0660aff8ea0d Jan 29 16:14:01 crc kubenswrapper[4714]: E0129 16:14:01.560644 4714 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.46:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f3fb8cf10316f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 16:14:01.560371567 +0000 UTC m=+248.080872697,LastTimestamp:2026-01-29 16:14:01.560371567 +0000 UTC m=+248.080872697,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 16:14:02 crc kubenswrapper[4714]: I0129 16:14:02.323534 4714 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"bab9b90f5d182c4ebe5eb811fac6ec3fd2ecd7c7c38b72ecfd4df4e4a0e90141"} Jan 29 16:14:02 crc kubenswrapper[4714]: I0129 16:14:02.323592 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c2827e545ce1c2813ef6994a00eaf13c969a902919f03f1d776c0660aff8ea0d"} Jan 29 16:14:02 crc kubenswrapper[4714]: E0129 16:14:02.324233 4714 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.46:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:14:02 crc kubenswrapper[4714]: I0129 16:14:02.324484 4714 status_manager.go:851] "Failed to get status for pod" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" pod="openshift-marketplace/community-operators-xtr82" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xtr82\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:14:02 crc kubenswrapper[4714]: I0129 16:14:02.324740 4714 status_manager.go:851] "Failed to get status for pod" podUID="98a35d03-ef3b-4341-9866-56d12a28aee3" pod="openshift-marketplace/community-operators-6bjgq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6bjgq\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:14:02 crc kubenswrapper[4714]: I0129 16:14:02.325076 4714 status_manager.go:851] "Failed to get status for pod" podUID="9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:14:04 crc kubenswrapper[4714]: I0129 16:14:04.189364 4714 status_manager.go:851] "Failed to get status for pod" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" pod="openshift-marketplace/community-operators-xtr82" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xtr82\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:14:04 crc kubenswrapper[4714]: I0129 16:14:04.190301 4714 status_manager.go:851] "Failed to get status for pod" podUID="9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:14:04 crc kubenswrapper[4714]: I0129 16:14:04.190712 4714 status_manager.go:851] "Failed to get status for pod" podUID="98a35d03-ef3b-4341-9866-56d12a28aee3" pod="openshift-marketplace/community-operators-6bjgq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6bjgq\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:14:04 crc kubenswrapper[4714]: E0129 16:14:04.973787 4714 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: 
connection refused" Jan 29 16:14:04 crc kubenswrapper[4714]: E0129 16:14:04.974775 4714 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:14:04 crc kubenswrapper[4714]: E0129 16:14:04.975471 4714 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:14:04 crc kubenswrapper[4714]: E0129 16:14:04.975904 4714 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:14:04 crc kubenswrapper[4714]: E0129 16:14:04.976493 4714 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:14:04 crc kubenswrapper[4714]: I0129 16:14:04.976543 4714 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 29 16:14:04 crc kubenswrapper[4714]: E0129 16:14:04.976846 4714 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="200ms" Jan 29 16:14:05 crc kubenswrapper[4714]: E0129 16:14:05.178586 4714 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="400ms" Jan 29 16:14:05 crc kubenswrapper[4714]: E0129 16:14:05.579367 4714 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="800ms" Jan 29 16:14:06 crc kubenswrapper[4714]: E0129 16:14:06.380236 4714 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="1.6s" Jan 29 16:14:07 crc kubenswrapper[4714]: E0129 16:14:07.981248 4714 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="3.2s" Jan 29 16:14:08 crc kubenswrapper[4714]: I0129 16:14:08.183621 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:14:08 crc kubenswrapper[4714]: I0129 16:14:08.184560 4714 status_manager.go:851] "Failed to get status for pod" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" pod="openshift-marketplace/community-operators-xtr82" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xtr82\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:14:08 crc kubenswrapper[4714]: I0129 16:14:08.185323 4714 status_manager.go:851] "Failed to get status for pod" podUID="98a35d03-ef3b-4341-9866-56d12a28aee3" pod="openshift-marketplace/community-operators-6bjgq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6bjgq\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:14:08 crc kubenswrapper[4714]: I0129 16:14:08.185810 4714 status_manager.go:851] "Failed to get status for pod" podUID="9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:14:08 crc kubenswrapper[4714]: I0129 16:14:08.208923 4714 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="810f3b92-c43d-41fd-8a5f-0f926ed63e50" Jan 29 16:14:08 crc kubenswrapper[4714]: I0129 16:14:08.208990 4714 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="810f3b92-c43d-41fd-8a5f-0f926ed63e50" Jan 29 16:14:08 crc kubenswrapper[4714]: E0129 16:14:08.209488 4714 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:14:08 crc kubenswrapper[4714]: I0129 16:14:08.210073 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:14:08 crc kubenswrapper[4714]: I0129 16:14:08.370844 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e76183f9859ceea5d6ba47762effc0bc5f31d2e9dcea477167d58699be798bdd"} Jan 29 16:14:09 crc kubenswrapper[4714]: I0129 16:14:09.378420 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 16:14:09 crc kubenswrapper[4714]: I0129 16:14:09.378457 4714 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835" exitCode=1 Jan 29 16:14:09 crc kubenswrapper[4714]: I0129 16:14:09.378502 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835"} Jan 29 16:14:09 crc kubenswrapper[4714]: I0129 16:14:09.378991 4714 scope.go:117] "RemoveContainer" containerID="d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835" Jan 29 16:14:09 crc kubenswrapper[4714]: I0129 16:14:09.381267 4714 status_manager.go:851] "Failed to get status for pod" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" pod="openshift-marketplace/community-operators-xtr82" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xtr82\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:14:09 crc kubenswrapper[4714]: I0129 16:14:09.381663 4714 status_manager.go:851] "Failed to get status for pod" podUID="98a35d03-ef3b-4341-9866-56d12a28aee3" pod="openshift-marketplace/community-operators-6bjgq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6bjgq\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:14:09 crc kubenswrapper[4714]: I0129 16:14:09.382206 4714 status_manager.go:851] "Failed to get status for pod" podUID="9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:14:09 crc kubenswrapper[4714]: I0129 16:14:09.382491 4714 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="557886b8390c654504d86188f97a1e0330661ae0a5a81431ae900193973c2ff7" exitCode=0 Jan 29 16:14:09 crc kubenswrapper[4714]: I0129 16:14:09.382515 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"557886b8390c654504d86188f97a1e0330661ae0a5a81431ae900193973c2ff7"} Jan 29 16:14:09 crc kubenswrapper[4714]: I0129 16:14:09.382734 4714 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.46:6443: connect: 
connection refused" Jan 29 16:14:09 crc kubenswrapper[4714]: I0129 16:14:09.382851 4714 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="810f3b92-c43d-41fd-8a5f-0f926ed63e50" Jan 29 16:14:09 crc kubenswrapper[4714]: I0129 16:14:09.382883 4714 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="810f3b92-c43d-41fd-8a5f-0f926ed63e50" Jan 29 16:14:09 crc kubenswrapper[4714]: E0129 16:14:09.383282 4714 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:14:09 crc kubenswrapper[4714]: I0129 16:14:09.383513 4714 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:14:09 crc kubenswrapper[4714]: I0129 16:14:09.383914 4714 status_manager.go:851] "Failed to get status for pod" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" pod="openshift-marketplace/community-operators-xtr82" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-xtr82\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:14:09 crc kubenswrapper[4714]: I0129 16:14:09.384357 4714 status_manager.go:851] "Failed to get status for pod" podUID="98a35d03-ef3b-4341-9866-56d12a28aee3" pod="openshift-marketplace/community-operators-6bjgq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6bjgq\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:14:09 crc kubenswrapper[4714]: I0129 16:14:09.385120 4714 status_manager.go:851] "Failed to get status for pod" podUID="9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Jan 29 16:14:10 crc kubenswrapper[4714]: I0129 16:14:10.177849 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:14:10 crc kubenswrapper[4714]: I0129 16:14:10.404612 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 16:14:10 crc kubenswrapper[4714]: I0129 16:14:10.404726 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e41d24707f09801e897cb0baf4db02b0644cf5ce6dabf14163675df4678cd1d1"} Jan 29 16:14:10 crc kubenswrapper[4714]: I0129 16:14:10.409670 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"62502093aedf338472b1261dc436b80ad6d8f7171817093db5cd839155c867a6"} Jan 29 16:14:10 crc kubenswrapper[4714]: I0129 16:14:10.409728 4714 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4c7978fe28e5ff4501e1852333df37847b013416c0b696c89be556464d8091e5"} Jan 29 16:14:10 crc kubenswrapper[4714]: I0129 16:14:10.409747 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"47e1a9368e1ef86883e3c9c9f4b17768154fc3db5a8ae34a582625e9c46c2141"} Jan 29 16:14:11 crc kubenswrapper[4714]: I0129 16:14:11.416402 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"79e61e663a3f3661a37379cab961763d4262bf19a479e965afdef0e30b930772"} Jan 29 16:14:11 crc kubenswrapper[4714]: I0129 16:14:11.416451 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"908aa0992c2ec1ba47bc8f6aacd67caae83f412a5dc37bca91d4fa7c0135ee38"} Jan 29 16:14:11 crc kubenswrapper[4714]: I0129 16:14:11.416608 4714 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="810f3b92-c43d-41fd-8a5f-0f926ed63e50" Jan 29 16:14:11 crc kubenswrapper[4714]: I0129 16:14:11.416618 4714 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="810f3b92-c43d-41fd-8a5f-0f926ed63e50" Jan 29 16:14:13 crc kubenswrapper[4714]: I0129 16:14:13.210803 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:14:13 crc kubenswrapper[4714]: I0129 16:14:13.211257 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:14:13 crc kubenswrapper[4714]: I0129 16:14:13.220839 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:14:16 crc kubenswrapper[4714]: I0129 16:14:16.426216 4714 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:14:16 crc kubenswrapper[4714]: I0129 16:14:16.456919 4714 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"810f3b92-c43d-41fd-8a5f-0f926ed63e50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:14:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:14:09Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:14:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:14:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e1a9368e1ef86883e3c9c9f4b17768154fc3db5a8ae34a582625e9c46c2141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:14:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62502093aedf338472b1261dc436b80ad6d8f7171817093db5cd839155c867a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:14:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7978fe28e5ff4501e1852333df37847b013416c0b696c89be556464d8091e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:14:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e61e663a3f3661a37379cab961763d4262bf19a479e965afdef0e30b930772\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:14:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://908aa0992c
2ec1ba47bc8f6aacd67caae83f412a5dc37bca91d4fa7c0135ee38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:14:10Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://557886b8390c654504d86188f97a1e0330661ae0a5a81431ae900193973c2ff7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://557886b8390c654504d86188f97a1e0330661ae0a5a81431ae900193973c2ff7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:14:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:14:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}]}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Pod \"kube-apiserver-crc\" is invalid: metadata.uid: Invalid value: \"810f3b92-c43d-41fd-8a5f-0f926ed63e50\": field is immutable" Jan 29 16:14:17 crc kubenswrapper[4714]: I0129 16:14:17.450378 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:14:17 crc kubenswrapper[4714]: I0129 16:14:17.450510 4714 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="810f3b92-c43d-41fd-8a5f-0f926ed63e50" Jan 29 16:14:17 crc kubenswrapper[4714]: I0129 16:14:17.450547 4714 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="810f3b92-c43d-41fd-8a5f-0f926ed63e50" Jan 29 16:14:17 crc kubenswrapper[4714]: I0129 16:14:17.458689 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:14:17 crc kubenswrapper[4714]: I0129 16:14:17.464460 4714 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d341acb7-4a33-42d9-b4bb-de8e3c376306" Jan 29 16:14:18 crc kubenswrapper[4714]: I0129 16:14:18.469017 4714 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="810f3b92-c43d-41fd-8a5f-0f926ed63e50" Jan 29 16:14:18 crc kubenswrapper[4714]: I0129 16:14:18.469339 4714 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="810f3b92-c43d-41fd-8a5f-0f926ed63e50" Jan 29 16:14:18 crc kubenswrapper[4714]: I0129 16:14:18.473798 4714 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d341acb7-4a33-42d9-b4bb-de8e3c376306" Jan 29 16:14:18 crc kubenswrapper[4714]: I0129 16:14:18.513197 
4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:14:18 crc kubenswrapper[4714]: I0129 16:14:18.513767 4714 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 29 16:14:18 crc kubenswrapper[4714]: I0129 16:14:18.513918 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 29 16:14:19 crc kubenswrapper[4714]: I0129 16:14:19.481019 4714 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="810f3b92-c43d-41fd-8a5f-0f926ed63e50" Jan 29 16:14:19 crc kubenswrapper[4714]: I0129 16:14:19.481068 4714 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="810f3b92-c43d-41fd-8a5f-0f926ed63e50" Jan 29 16:14:19 crc kubenswrapper[4714]: I0129 16:14:19.485447 4714 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d341acb7-4a33-42d9-b4bb-de8e3c376306" Jan 29 16:14:20 crc kubenswrapper[4714]: I0129 16:14:20.177669 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:14:20 crc kubenswrapper[4714]: I0129 16:14:20.978323 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" podUID="832097a5-4691-42b6-99cc-38679071d5ee" containerName="oauth-openshift" containerID="cri-o://9bc1cd1aee2de10059a78dc94ebfc1cb6a64c8e3be39806bf6e6ab107f47abff" gracePeriod=15 Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.452178 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.494052 4714 generic.go:334] "Generic (PLEG): container finished" podID="832097a5-4691-42b6-99cc-38679071d5ee" containerID="9bc1cd1aee2de10059a78dc94ebfc1cb6a64c8e3be39806bf6e6ab107f47abff" exitCode=0 Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.494102 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" event={"ID":"832097a5-4691-42b6-99cc-38679071d5ee","Type":"ContainerDied","Data":"9bc1cd1aee2de10059a78dc94ebfc1cb6a64c8e3be39806bf6e6ab107f47abff"} Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.494130 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" event={"ID":"832097a5-4691-42b6-99cc-38679071d5ee","Type":"ContainerDied","Data":"3a5ee9422c0e8f2bda4f13b1ec7a93ce78a161df42fc1dddfe6f8337aed30775"} Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.494133 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-h8b4r" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.494147 4714 scope.go:117] "RemoveContainer" containerID="9bc1cd1aee2de10059a78dc94ebfc1cb6a64c8e3be39806bf6e6ab107f47abff" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.512834 4714 scope.go:117] "RemoveContainer" containerID="9bc1cd1aee2de10059a78dc94ebfc1cb6a64c8e3be39806bf6e6ab107f47abff" Jan 29 16:14:21 crc kubenswrapper[4714]: E0129 16:14:21.513296 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bc1cd1aee2de10059a78dc94ebfc1cb6a64c8e3be39806bf6e6ab107f47abff\": container with ID starting with 9bc1cd1aee2de10059a78dc94ebfc1cb6a64c8e3be39806bf6e6ab107f47abff not found: ID does not exist" containerID="9bc1cd1aee2de10059a78dc94ebfc1cb6a64c8e3be39806bf6e6ab107f47abff" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.513343 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bc1cd1aee2de10059a78dc94ebfc1cb6a64c8e3be39806bf6e6ab107f47abff"} err="failed to get container status \"9bc1cd1aee2de10059a78dc94ebfc1cb6a64c8e3be39806bf6e6ab107f47abff\": rpc error: code = NotFound desc = could not find container \"9bc1cd1aee2de10059a78dc94ebfc1cb6a64c8e3be39806bf6e6ab107f47abff\": container with ID starting with 9bc1cd1aee2de10059a78dc94ebfc1cb6a64c8e3be39806bf6e6ab107f47abff not found: ID does not exist" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.604409 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-cliconfig\") pod \"832097a5-4691-42b6-99cc-38679071d5ee\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.604473 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-session\") pod \"832097a5-4691-42b6-99cc-38679071d5ee\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.604534 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-user-template-login\") pod \"832097a5-4691-42b6-99cc-38679071d5ee\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.604572 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8nmp\" (UniqueName: \"kubernetes.io/projected/832097a5-4691-42b6-99cc-38679071d5ee-kube-api-access-l8nmp\") pod \"832097a5-4691-42b6-99cc-38679071d5ee\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.604616 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-ocp-branding-template\") pod \"832097a5-4691-42b6-99cc-38679071d5ee\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.604654 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-router-certs\") pod \"832097a5-4691-42b6-99cc-38679071d5ee\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.604692 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-user-template-error\") pod \"832097a5-4691-42b6-99cc-38679071d5ee\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.604735 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/832097a5-4691-42b6-99cc-38679071d5ee-audit-dir\") pod \"832097a5-4691-42b6-99cc-38679071d5ee\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.604782 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-serving-cert\") pod \"832097a5-4691-42b6-99cc-38679071d5ee\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.604813 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-user-idp-0-file-data\") pod \"832097a5-4691-42b6-99cc-38679071d5ee\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.604863 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/832097a5-4691-42b6-99cc-38679071d5ee-audit-policies\") pod \"832097a5-4691-42b6-99cc-38679071d5ee\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.604903 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-service-ca\") pod \"832097a5-4691-42b6-99cc-38679071d5ee\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.604926 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/832097a5-4691-42b6-99cc-38679071d5ee-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "832097a5-4691-42b6-99cc-38679071d5ee" (UID: "832097a5-4691-42b6-99cc-38679071d5ee"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.604961 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-trusted-ca-bundle\") pod \"832097a5-4691-42b6-99cc-38679071d5ee\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.605061 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-user-template-provider-selection\") pod \"832097a5-4691-42b6-99cc-38679071d5ee\" (UID: \"832097a5-4691-42b6-99cc-38679071d5ee\") " Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.605390 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "832097a5-4691-42b6-99cc-38679071d5ee" (UID: "832097a5-4691-42b6-99cc-38679071d5ee"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.605582 4714 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/832097a5-4691-42b6-99cc-38679071d5ee-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.605612 4714 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.605679 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "832097a5-4691-42b6-99cc-38679071d5ee" (UID: "832097a5-4691-42b6-99cc-38679071d5ee"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.606641 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/832097a5-4691-42b6-99cc-38679071d5ee-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "832097a5-4691-42b6-99cc-38679071d5ee" (UID: "832097a5-4691-42b6-99cc-38679071d5ee"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.606655 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "832097a5-4691-42b6-99cc-38679071d5ee" (UID: "832097a5-4691-42b6-99cc-38679071d5ee"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.610884 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "832097a5-4691-42b6-99cc-38679071d5ee" (UID: "832097a5-4691-42b6-99cc-38679071d5ee"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.610988 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/832097a5-4691-42b6-99cc-38679071d5ee-kube-api-access-l8nmp" (OuterVolumeSpecName: "kube-api-access-l8nmp") pod "832097a5-4691-42b6-99cc-38679071d5ee" (UID: "832097a5-4691-42b6-99cc-38679071d5ee"). InnerVolumeSpecName "kube-api-access-l8nmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.611409 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "832097a5-4691-42b6-99cc-38679071d5ee" (UID: "832097a5-4691-42b6-99cc-38679071d5ee"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.611720 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "832097a5-4691-42b6-99cc-38679071d5ee" (UID: "832097a5-4691-42b6-99cc-38679071d5ee"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.612022 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "832097a5-4691-42b6-99cc-38679071d5ee" (UID: "832097a5-4691-42b6-99cc-38679071d5ee"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.612216 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "832097a5-4691-42b6-99cc-38679071d5ee" (UID: "832097a5-4691-42b6-99cc-38679071d5ee"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.616236 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "832097a5-4691-42b6-99cc-38679071d5ee" (UID: "832097a5-4691-42b6-99cc-38679071d5ee"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.617395 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "832097a5-4691-42b6-99cc-38679071d5ee" (UID: "832097a5-4691-42b6-99cc-38679071d5ee"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.617526 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "832097a5-4691-42b6-99cc-38679071d5ee" (UID: "832097a5-4691-42b6-99cc-38679071d5ee"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.706671 4714 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.706728 4714 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.706752 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8nmp\" (UniqueName: \"kubernetes.io/projected/832097a5-4691-42b6-99cc-38679071d5ee-kube-api-access-l8nmp\") on node \"crc\" DevicePath \"\"" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.706773 4714 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.706792 4714 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.706811 4714 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.706828 4714 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.706846 4714 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.706865 4714 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/832097a5-4691-42b6-99cc-38679071d5ee-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.706883 4714 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.706900 4714 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:14:21 crc kubenswrapper[4714]: I0129 16:14:21.706921 4714 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/832097a5-4691-42b6-99cc-38679071d5ee-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 16:14:25 crc kubenswrapper[4714]: I0129 16:14:25.924622 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 16:14:26 crc kubenswrapper[4714]: I0129 16:14:26.478571 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 16:14:26 crc kubenswrapper[4714]: I0129 16:14:26.879705 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 16:14:27 crc kubenswrapper[4714]: I0129 16:14:27.360789 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 16:14:27 crc kubenswrapper[4714]: I0129 16:14:27.472355 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 16:14:27 crc kubenswrapper[4714]: I0129 16:14:27.565722 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 16:14:27 crc kubenswrapper[4714]: I0129 16:14:27.649273 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 16:14:27 crc kubenswrapper[4714]: I0129 16:14:27.751321 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 16:14:27 crc kubenswrapper[4714]: I0129 16:14:27.802050 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 16:14:27 crc kubenswrapper[4714]: I0129 16:14:27.869251 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 16:14:28 crc kubenswrapper[4714]: I0129 16:14:28.222259 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 16:14:28 crc kubenswrapper[4714]: I0129 16:14:28.513413 4714 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= 
Jan 29 16:14:28 crc kubenswrapper[4714]: I0129 16:14:28.513488 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 29 16:14:28 crc kubenswrapper[4714]: I0129 16:14:28.585985 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 29 16:14:28 crc kubenswrapper[4714]: I0129 16:14:28.696317 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 29 16:14:28 crc kubenswrapper[4714]: I0129 16:14:28.788207 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 29 16:14:28 crc kubenswrapper[4714]: I0129 16:14:28.939879 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 29 16:14:29 crc kubenswrapper[4714]: I0129 16:14:29.348190 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 29 16:14:29 crc kubenswrapper[4714]: I0129 16:14:29.389566 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 29 16:14:30 crc kubenswrapper[4714]: I0129 16:14:30.015694 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 29 16:14:30 crc kubenswrapper[4714]: I0129 16:14:30.326636 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 29 16:14:30 crc kubenswrapper[4714]: I0129 16:14:30.335299 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 29 16:14:30 crc kubenswrapper[4714]: I0129 16:14:30.481724 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 29 16:14:30 crc kubenswrapper[4714]: I0129 16:14:30.502574 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 29 16:14:30 crc kubenswrapper[4714]: I0129 16:14:30.517517 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 29 16:14:30 crc kubenswrapper[4714]: I0129 16:14:30.654174 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 29 16:14:30 crc kubenswrapper[4714]: I0129 16:14:30.676918 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 29 16:14:30 crc kubenswrapper[4714]: I0129 16:14:30.697743 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 29 16:14:30 crc kubenswrapper[4714]: I0129 16:14:30.733763 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 29 16:14:30 crc kubenswrapper[4714]: I0129 16:14:30.860426 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 29 16:14:30 crc kubenswrapper[4714]: I0129 16:14:30.986107 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 29 16:14:31 crc kubenswrapper[4714]: I0129 16:14:31.038215 4714 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 29 16:14:31 crc kubenswrapper[4714]: I0129 16:14:31.058012 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 29 16:14:31 crc kubenswrapper[4714]: I0129 16:14:31.085042 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 29 16:14:31 crc kubenswrapper[4714]: I0129 16:14:31.176563 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 29 16:14:31 crc kubenswrapper[4714]: I0129 16:14:31.186671 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 29 16:14:31 crc kubenswrapper[4714]: I0129 16:14:31.241506 4714 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 29 16:14:31 crc kubenswrapper[4714]: I0129 16:14:31.297161 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 29 16:14:31 crc kubenswrapper[4714]: I0129 16:14:31.387720 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 29 16:14:31 crc kubenswrapper[4714]: I0129 16:14:31.413373 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 29 16:14:31 crc kubenswrapper[4714]: I0129 16:14:31.433537 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 29 16:14:31 crc kubenswrapper[4714]: I0129 16:14:31.488522 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 29 16:14:31 crc kubenswrapper[4714]: I0129 16:14:31.490775 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 29 16:14:31 crc kubenswrapper[4714]: I0129 16:14:31.504069 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 29 16:14:31 crc kubenswrapper[4714]: I0129 16:14:31.580899 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 29 16:14:31 crc kubenswrapper[4714]: I0129 16:14:31.586343 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 29 16:14:31 crc kubenswrapper[4714]: I0129 16:14:31.649732 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 29 16:14:31 crc kubenswrapper[4714]: I0129 16:14:31.675018 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 29 16:14:31 crc kubenswrapper[4714]: I0129 16:14:31.716404 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 29 16:14:31 crc kubenswrapper[4714]: I0129 16:14:31.765726 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 29 16:14:31 crc kubenswrapper[4714]: I0129 16:14:31.775541 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 29 16:14:31 crc kubenswrapper[4714]: I0129 16:14:31.790558 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 29 16:14:32 crc kubenswrapper[4714]: I0129 16:14:32.000871 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 29 16:14:32 crc kubenswrapper[4714]: I0129 16:14:32.048986 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 29 16:14:32 crc kubenswrapper[4714]: I0129 16:14:32.118777 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 29 16:14:32 crc kubenswrapper[4714]: I0129 16:14:32.123501 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 29 16:14:32 crc kubenswrapper[4714]: I0129 16:14:32.249262 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 29 16:14:32 crc kubenswrapper[4714]: I0129 16:14:32.249262 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 29 16:14:32 crc kubenswrapper[4714]: I0129 16:14:32.332242 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 29 16:14:32 crc kubenswrapper[4714]: I0129 16:14:32.354099 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 29 16:14:32 crc kubenswrapper[4714]: I0129 16:14:32.373598 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 29 16:14:32 crc kubenswrapper[4714]: I0129 16:14:32.500759 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 29 16:14:32 crc kubenswrapper[4714]: I0129 16:14:32.589645 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 29 16:14:32 crc kubenswrapper[4714]: I0129 16:14:32.673304 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 29 16:14:32 crc kubenswrapper[4714]: I0129 16:14:32.699695 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 29 16:14:32 crc kubenswrapper[4714]: I0129 16:14:32.721216 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 29 16:14:32 crc kubenswrapper[4714]: I0129 16:14:32.835761 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 29 16:14:32 crc kubenswrapper[4714]: I0129 16:14:32.959689 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 29 16:14:32 crc kubenswrapper[4714]: I0129 16:14:32.962276 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 29 16:14:32 crc kubenswrapper[4714]: I0129 16:14:32.978901 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 29 16:14:33 crc kubenswrapper[4714]: I0129 16:14:33.080383 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 29 16:14:33 crc kubenswrapper[4714]: I0129 16:14:33.179509 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 29 16:14:33 crc kubenswrapper[4714]: I0129 16:14:33.285836 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 29 16:14:33 crc kubenswrapper[4714]: I0129 16:14:33.293658 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 29 16:14:33 crc kubenswrapper[4714]: I0129 16:14:33.392287 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 29 16:14:33 crc kubenswrapper[4714]: I0129 16:14:33.410685 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 29 16:14:33 crc kubenswrapper[4714]: I0129 16:14:33.473831 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 29 16:14:33 crc kubenswrapper[4714]: I0129 16:14:33.672894 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 29 16:14:33 crc kubenswrapper[4714]: I0129 16:14:33.685036 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 29 16:14:33 crc kubenswrapper[4714]: I0129 16:14:33.789210 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 29 16:14:33 crc kubenswrapper[4714]: I0129 16:14:33.844323 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 29 16:14:33 crc kubenswrapper[4714]: I0129 16:14:33.878921 4714 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 29 16:14:33 crc kubenswrapper[4714]: I0129 16:14:33.887408 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 29 16:14:33 crc kubenswrapper[4714]: I0129 16:14:33.889633 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 29 16:14:33 crc kubenswrapper[4714]: I0129 16:14:33.948954 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 29 16:14:34 crc kubenswrapper[4714]: I0129 16:14:34.108735 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 29 16:14:34 crc kubenswrapper[4714]: I0129 16:14:34.239762 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 29 16:14:34 crc kubenswrapper[4714]: I0129 16:14:34.333830 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 29 16:14:34 crc kubenswrapper[4714]: I0129 16:14:34.383057 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 29 16:14:34 crc kubenswrapper[4714]: I0129 16:14:34.398480 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 29 16:14:34 crc kubenswrapper[4714]: I0129 16:14:34.547737 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 29 16:14:34 crc kubenswrapper[4714]: I0129 16:14:34.556730 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 29 16:14:34 crc kubenswrapper[4714]: I0129 16:14:34.621641 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 29 16:14:34 crc kubenswrapper[4714]: I0129 16:14:34.679257 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 29 16:14:34 crc kubenswrapper[4714]: I0129 16:14:34.681802 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 29 16:14:34 crc kubenswrapper[4714]: I0129 16:14:34.701506 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 29 16:14:34 crc kubenswrapper[4714]: I0129 16:14:34.723289 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 29 16:14:34 crc kubenswrapper[4714]: I0129 16:14:34.766008 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 29 16:14:34 crc kubenswrapper[4714]: I0129 16:14:34.935035 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 29 16:14:35 crc kubenswrapper[4714]: I0129 16:14:35.082392 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 29 16:14:35 crc kubenswrapper[4714]: I0129 16:14:35.121414 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 29 16:14:35 crc kubenswrapper[4714]: I0129 16:14:35.216967 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 29 16:14:35 crc kubenswrapper[4714]: I0129 16:14:35.292146 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 29 16:14:35 crc kubenswrapper[4714]: I0129 16:14:35.329058 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 29 16:14:35 crc kubenswrapper[4714]: I0129 16:14:35.329057 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 29 16:14:35 crc kubenswrapper[4714]: I0129 16:14:35.374417 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 29 16:14:35 crc kubenswrapper[4714]: I0129 16:14:35.426806 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 29 16:14:35 crc kubenswrapper[4714]: I0129 16:14:35.481877 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 29 16:14:35 crc kubenswrapper[4714]: I0129 16:14:35.546432 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 29 16:14:35 crc kubenswrapper[4714]: I0129 16:14:35.616915 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 29 16:14:35 crc kubenswrapper[4714]: I0129 16:14:35.662851 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 29 16:14:35 crc kubenswrapper[4714]: I0129 16:14:35.734541 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 29 16:14:35 crc kubenswrapper[4714]: I0129 16:14:35.742604 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 29 16:14:35 crc kubenswrapper[4714]: I0129 16:14:35.833239 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 29 16:14:35 crc kubenswrapper[4714]: I0129 16:14:35.845876 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 29 16:14:35 crc kubenswrapper[4714]: I0129 16:14:35.875121 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 29 16:14:35 crc kubenswrapper[4714]: I0129 16:14:35.902661 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 29 16:14:36 crc kubenswrapper[4714]: I0129 16:14:36.089869 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 29 16:14:36 crc kubenswrapper[4714]: I0129 16:14:36.177224 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 29 16:14:36 crc kubenswrapper[4714]: I0129 16:14:36.277910 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 29 16:14:36 crc kubenswrapper[4714]: I0129 16:14:36.343140 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 29 16:14:36 crc kubenswrapper[4714]: I0129 16:14:36.369839 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 29 16:14:36 crc kubenswrapper[4714]: I0129 16:14:36.379487 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 29 16:14:36 crc kubenswrapper[4714]: I0129 16:14:36.408357 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 29 16:14:36 crc kubenswrapper[4714]: I0129 16:14:36.512516 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 29 16:14:36 crc kubenswrapper[4714]: I0129 16:14:36.544443 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 29 16:14:36 crc kubenswrapper[4714]: I0129 16:14:36.650491 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 29 16:14:36 crc kubenswrapper[4714]: I0129 16:14:36.667915 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 29 16:14:36 crc kubenswrapper[4714]: I0129 16:14:36.669805 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 29 16:14:36 crc kubenswrapper[4714]: I0129 16:14:36.692864 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 29 16:14:36 crc kubenswrapper[4714]: I0129 16:14:36.757064 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 29 16:14:36 crc kubenswrapper[4714]: I0129 16:14:36.780365 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 29 16:14:36 crc kubenswrapper[4714]: I0129 16:14:36.842593 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 29 16:14:36 crc kubenswrapper[4714]: I0129 16:14:36.868861 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 29 16:14:36 crc kubenswrapper[4714]: I0129 16:14:36.981490 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 29 16:14:36 crc kubenswrapper[4714]: I0129 16:14:36.995670 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.025002 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.111419 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.205792 4714 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.210901 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-h8b4r"]
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.210990 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 29 16:14:37 crc kubenswrapper[4714]: E0129 16:14:37.211182 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24" containerName="installer"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.211195 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24" containerName="installer"
Jan 29 16:14:37 crc kubenswrapper[4714]: E0129 16:14:37.211207 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832097a5-4691-42b6-99cc-38679071d5ee" containerName="oauth-openshift"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.211215 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="832097a5-4691-42b6-99cc-38679071d5ee" containerName="oauth-openshift"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.211333 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="832097a5-4691-42b6-99cc-38679071d5ee" containerName="oauth-openshift"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.211348 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cd159bf-dc15-4cdb-a61a-97c8b5ee4f24" containerName="installer"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.211705 4714 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="810f3b92-c43d-41fd-8a5f-0f926ed63e50"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.211745 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.211754 4714 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="810f3b92-c43d-41fd-8a5f-0f926ed63e50"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.213073 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.214632 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.215494 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.215692 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.215846 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.216091 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.216154 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.216525 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.219475 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.222705 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.222916 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.223130 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.223144 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.224060 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.244480 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.255725 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.260739 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.273008 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.280663 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.287665 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.287643797 podStartE2EDuration="21.287643797s" podCreationTimestamp="2026-01-29 16:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:37.286552176 +0000 UTC m=+283.807053336" watchObservedRunningTime="2026-01-29 16:14:37.287643797 +0000 UTC m=+283.808144957"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.332153 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.347464 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.347608 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.347655 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4464af5b-c16f-46b7-b29b-c22fa78843f3-audit-dir\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.347686 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-user-template-login\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.347737 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-system-service-ca\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.347832 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.347868 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-system-session\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.347894 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-user-template-error\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.347929 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.348001 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.348113 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-system-router-certs\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.348195 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fdhw\" (UniqueName: \"kubernetes.io/projected/4464af5b-c16f-46b7-b29b-c22fa78843f3-kube-api-access-6fdhw\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.348239 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4464af5b-c16f-46b7-b29b-c22fa78843f3-audit-policies\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.348292 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.377726 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.389041 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.412304 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.431630 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.437049 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.449778 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fdhw\" (UniqueName: \"kubernetes.io/projected/4464af5b-c16f-46b7-b29b-c22fa78843f3-kube-api-access-6fdhw\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.449866 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4464af5b-c16f-46b7-b29b-c22fa78843f3-audit-policies\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.450012 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.450092 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.450156 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.450218 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4464af5b-c16f-46b7-b29b-c22fa78843f3-audit-dir\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.450268 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-user-template-login\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.450351 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-system-service-ca\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.450420 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.450449 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4464af5b-c16f-46b7-b29b-c22fa78843f3-audit-dir\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.450477 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-system-session\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.451517 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4464af5b-c16f-46b7-b29b-c22fa78843f3-audit-policies\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.451761 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-user-template-error\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.451834 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.452308 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.452407 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-system-router-certs\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.452412 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.454375 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.456464 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.456557 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-system-service-ca\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.456735 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-user-template-login\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.457571 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.460117 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.465631 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.466051 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-user-template-error\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.467273 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-system-session\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.469483 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4464af5b-c16f-46b7-b29b-c22fa78843f3-v4-0-config-system-router-certs\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.481549 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fdhw\" (UniqueName: \"kubernetes.io/projected/4464af5b-c16f-46b7-b29b-c22fa78843f3-kube-api-access-6fdhw\") pod \"oauth-openshift-7cddd88c7f-kq5gg\" (UID: \"4464af5b-c16f-46b7-b29b-c22fa78843f3\") " pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.538841 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.581228 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.647691 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.648035 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.669696 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.709029 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.711407 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.735171 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.827460 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"]
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.978674 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 29 16:14:37 crc kubenswrapper[4714]: I0129 16:14:37.988638 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.098864 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.131277 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.164263 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.166391 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.195651 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="832097a5-4691-42b6-99cc-38679071d5ee" path="/var/lib/kubelet/pods/832097a5-4691-42b6-99cc-38679071d5ee/volumes"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.241272 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.274486 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.293597 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.363635 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.467989 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.513595 4714 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.513972 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.514042 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.514974 4714 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"e41d24707f09801e897cb0baf4db02b0644cf5ce6dabf14163675df4678cd1d1"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.515168 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://e41d24707f09801e897cb0baf4db02b0644cf5ce6dabf14163675df4678cd1d1" gracePeriod=30
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.533600 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.574999 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.592806 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.593744 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.635460 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.649716 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg" event={"ID":"4464af5b-c16f-46b7-b29b-c22fa78843f3","Type":"ContainerStarted","Data":"fa827496154097254ba766d8a3902e298682ee4add2b5c9ea235ac1d29c3d416"}
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.649793 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg" event={"ID":"4464af5b-c16f-46b7-b29b-c22fa78843f3","Type":"ContainerStarted","Data":"8160a991a75a27207a1d80dd0e4fdf5282d45ed57f5710eb653b00bc218edd8f"}
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.650361 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.693370 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.818374 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.837949 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.901967 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.908429 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.935233 4714 patch_prober.go:28] interesting pod/oauth-openshift-7cddd88c7f-kq5gg container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": read tcp 10.217.0.2:49984->10.217.0.56:6443: read: connection reset by peer" start-of-body=
Jan 29 16:14:38 crc kubenswrapper[4714]: I0129 16:14:38.935296 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg" podUID="4464af5b-c16f-46b7-b29b-c22fa78843f3" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": read tcp 10.217.0.2:49984->10.217.0.56:6443: read: connection reset by peer"
Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.001566 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.028143 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.050531 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.061248 4714 kubelet.go:2431] "SyncLoop REMOVE" source="file"
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.061586 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://bab9b90f5d182c4ebe5eb811fac6ec3fd2ecd7c7c38b72ecfd4df4e4a0e90141" gracePeriod=5 Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.100113 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.103401 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.106042 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.122825 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.135882 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.205587 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.211091 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.218550 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.322473 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.335116 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.363275 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.402051 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.411506 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.479644 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.486811 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.657181 4714 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-authentication_oauth-openshift-7cddd88c7f-kq5gg_4464af5b-c16f-46b7-b29b-c22fa78843f3/oauth-openshift/0.log" Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.657230 4714 generic.go:334] "Generic (PLEG): container finished" podID="4464af5b-c16f-46b7-b29b-c22fa78843f3" containerID="fa827496154097254ba766d8a3902e298682ee4add2b5c9ea235ac1d29c3d416" exitCode=255 Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.657260 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg" event={"ID":"4464af5b-c16f-46b7-b29b-c22fa78843f3","Type":"ContainerDied","Data":"fa827496154097254ba766d8a3902e298682ee4add2b5c9ea235ac1d29c3d416"} Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.657704 4714 scope.go:117] "RemoveContainer" containerID="fa827496154097254ba766d8a3902e298682ee4add2b5c9ea235ac1d29c3d416" Jan 29 16:14:39 crc kubenswrapper[4714]: I0129 16:14:39.957246 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 16:14:40 crc kubenswrapper[4714]: I0129 16:14:40.017594 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 16:14:40 crc kubenswrapper[4714]: I0129 16:14:40.056060 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 16:14:40 crc kubenswrapper[4714]: I0129 16:14:40.074373 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 16:14:40 crc kubenswrapper[4714]: I0129 16:14:40.141630 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 16:14:40 crc kubenswrapper[4714]: I0129 16:14:40.164994 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 16:14:40 crc kubenswrapper[4714]: I0129 16:14:40.205080 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 16:14:40 crc kubenswrapper[4714]: I0129 16:14:40.305734 4714 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 16:14:40 crc kubenswrapper[4714]: I0129 16:14:40.364623 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 16:14:40 crc kubenswrapper[4714]: I0129 16:14:40.576116 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 16:14:40 crc kubenswrapper[4714]: I0129 16:14:40.665974 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-7cddd88c7f-kq5gg_4464af5b-c16f-46b7-b29b-c22fa78843f3/oauth-openshift/0.log" Jan 29 16:14:40 crc kubenswrapper[4714]: I0129 16:14:40.666036 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg" event={"ID":"4464af5b-c16f-46b7-b29b-c22fa78843f3","Type":"ContainerStarted","Data":"9abc3d29fc5ca7a4d8cf8aa61dd76ea3277aa26f99a7eaae8b274e765a003eda"} Jan 29 16:14:40 crc kubenswrapper[4714]: I0129 16:14:40.667431 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg" Jan 29 16:14:40 crc 
kubenswrapper[4714]: I0129 16:14:40.676674 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg" Jan 29 16:14:40 crc kubenswrapper[4714]: I0129 16:14:40.697832 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7cddd88c7f-kq5gg" podStartSLOduration=45.697816919 podStartE2EDuration="45.697816919s" podCreationTimestamp="2026-01-29 16:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:38.681828364 +0000 UTC m=+285.202329514" watchObservedRunningTime="2026-01-29 16:14:40.697816919 +0000 UTC m=+287.218318039" Jan 29 16:14:40 crc kubenswrapper[4714]: I0129 16:14:40.843658 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 16:14:40 crc kubenswrapper[4714]: I0129 16:14:40.876437 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 16:14:40 crc kubenswrapper[4714]: I0129 16:14:40.889398 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 16:14:40 crc kubenswrapper[4714]: I0129 16:14:40.998437 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 16:14:41 crc kubenswrapper[4714]: I0129 16:14:41.047591 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 16:14:41 crc kubenswrapper[4714]: I0129 16:14:41.072760 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 16:14:41 crc kubenswrapper[4714]: I0129 16:14:41.170079 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 16:14:41 crc kubenswrapper[4714]: I0129 16:14:41.266330 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 16:14:41 crc kubenswrapper[4714]: I0129 16:14:41.325698 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 16:14:41 crc kubenswrapper[4714]: I0129 16:14:41.402810 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 16:14:41 crc kubenswrapper[4714]: I0129 16:14:41.543523 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 16:14:41 crc kubenswrapper[4714]: I0129 16:14:41.548628 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 16:14:41 crc kubenswrapper[4714]: I0129 16:14:41.587522 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 16:14:42 crc kubenswrapper[4714]: I0129 16:14:42.038985 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 16:14:42 crc kubenswrapper[4714]: I0129 16:14:42.115274 4714 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 16:14:42 crc kubenswrapper[4714]: I0129 16:14:42.181925 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 16:14:42 crc kubenswrapper[4714]: I0129 16:14:42.226878 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 16:14:42 crc kubenswrapper[4714]: I0129 16:14:42.245580 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 16:14:42 crc kubenswrapper[4714]: I0129 16:14:42.301654 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 16:14:42 crc kubenswrapper[4714]: I0129 16:14:42.376251 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 16:14:42 crc kubenswrapper[4714]: I0129 16:14:42.497661 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 16:14:42 crc kubenswrapper[4714]: I0129 16:14:42.735457 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 16:14:42 crc kubenswrapper[4714]: I0129 16:14:42.823230 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 16:14:42 crc kubenswrapper[4714]: I0129 16:14:42.973979 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 16:14:43 crc kubenswrapper[4714]: I0129 16:14:43.087247 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 16:14:43 crc kubenswrapper[4714]: I0129 16:14:43.182886 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 16:14:43 crc kubenswrapper[4714]: I0129 16:14:43.325490 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 16:14:43 crc kubenswrapper[4714]: I0129 16:14:43.347348 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 16:14:43 crc kubenswrapper[4714]: I0129 16:14:43.988411 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.112562 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.128479 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.342895 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.607391 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.621384 4714 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.667130 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.667258 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.691522 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.691598 4714 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="bab9b90f5d182c4ebe5eb811fac6ec3fd2ecd7c7c38b72ecfd4df4e4a0e90141" exitCode=137 Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.691638 4714 scope.go:117] "RemoveContainer" containerID="bab9b90f5d182c4ebe5eb811fac6ec3fd2ecd7c7c38b72ecfd4df4e4a0e90141" Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.691741 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.716562 4714 scope.go:117] "RemoveContainer" containerID="bab9b90f5d182c4ebe5eb811fac6ec3fd2ecd7c7c38b72ecfd4df4e4a0e90141" Jan 29 16:14:44 crc kubenswrapper[4714]: E0129 16:14:44.717175 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bab9b90f5d182c4ebe5eb811fac6ec3fd2ecd7c7c38b72ecfd4df4e4a0e90141\": container with ID starting with bab9b90f5d182c4ebe5eb811fac6ec3fd2ecd7c7c38b72ecfd4df4e4a0e90141 not found: ID does not exist" containerID="bab9b90f5d182c4ebe5eb811fac6ec3fd2ecd7c7c38b72ecfd4df4e4a0e90141" Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.717220 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bab9b90f5d182c4ebe5eb811fac6ec3fd2ecd7c7c38b72ecfd4df4e4a0e90141"} err="failed to get container status \"bab9b90f5d182c4ebe5eb811fac6ec3fd2ecd7c7c38b72ecfd4df4e4a0e90141\": rpc error: code = NotFound desc = could not find container \"bab9b90f5d182c4ebe5eb811fac6ec3fd2ecd7c7c38b72ecfd4df4e4a0e90141\": container with ID starting with bab9b90f5d182c4ebe5eb811fac6ec3fd2ecd7c7c38b72ecfd4df4e4a0e90141 not found: ID does not exist" Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.853309 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.853413 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.853461 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.853545 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.853576 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.853639 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.853656 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.853698 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.853817 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.854062 4714 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.854098 4714 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.854126 4714 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.854150 4714 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.866177 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:14:44 crc kubenswrapper[4714]: I0129 16:14:44.956945 4714 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:14:45 crc kubenswrapper[4714]: I0129 16:14:45.373359 4714 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 16:14:46 crc kubenswrapper[4714]: I0129 16:14:46.195132 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 29 16:14:53 crc kubenswrapper[4714]: I0129 16:14:53.987133 4714 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 29 16:15:08 crc kubenswrapper[4714]: I0129 16:15:08.851220 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 29 16:15:08 crc kubenswrapper[4714]: I0129 16:15:08.856782 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 16:15:08 crc kubenswrapper[4714]: I0129 16:15:08.856874 4714 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e41d24707f09801e897cb0baf4db02b0644cf5ce6dabf14163675df4678cd1d1" exitCode=137 Jan 29 16:15:08 crc kubenswrapper[4714]: I0129 16:15:08.856920 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e41d24707f09801e897cb0baf4db02b0644cf5ce6dabf14163675df4678cd1d1"} Jan 29 16:15:08 crc kubenswrapper[4714]: I0129 16:15:08.857023 4714 scope.go:117] 
"RemoveContainer" containerID="d8783a80e27263947194a6eff395452265c090e1a3e8155cb7cf12ec80eea835" Jan 29 16:15:09 crc kubenswrapper[4714]: I0129 16:15:09.866698 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 29 16:15:09 crc kubenswrapper[4714]: I0129 16:15:09.868775 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a65c6860cf9526e7c6f9b8c6dca74bc104279387d0f33f88b0361f6e16f3cd32"} Jan 29 16:15:10 crc kubenswrapper[4714]: I0129 16:15:10.178461 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:15:18 crc kubenswrapper[4714]: I0129 16:15:18.513706 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:15:18 crc kubenswrapper[4714]: I0129 16:15:18.521547 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:15:20 crc kubenswrapper[4714]: I0129 16:15:20.195276 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.073900 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495055-4hxfc"] Jan 29 16:15:28 crc kubenswrapper[4714]: E0129 16:15:28.074500 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.074511 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.074595 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.074953 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4hxfc" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.076714 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.081211 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495055-4hxfc"] Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.083026 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.102145 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjwmg\" (UniqueName: \"kubernetes.io/projected/d0d972c0-6998-401f-8f0a-5bea6ed5590f-kube-api-access-sjwmg\") pod \"collect-profiles-29495055-4hxfc\" (UID: \"d0d972c0-6998-401f-8f0a-5bea6ed5590f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4hxfc" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.102187 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0d972c0-6998-401f-8f0a-5bea6ed5590f-secret-volume\") pod \"collect-profiles-29495055-4hxfc\" (UID: \"d0d972c0-6998-401f-8f0a-5bea6ed5590f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4hxfc" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.102234 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0d972c0-6998-401f-8f0a-5bea6ed5590f-config-volume\") pod \"collect-profiles-29495055-4hxfc\" (UID: \"d0d972c0-6998-401f-8f0a-5bea6ed5590f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4hxfc" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.109375 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xlczd"] Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.109586 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" podUID="3c2d0611-58f8-4a7e-8280-361c80d62802" containerName="controller-manager" containerID="cri-o://1a48e7995b6652b74e320bf25807bcc51cd5f49496615ac9028d0bf40f37019e" gracePeriod=30 Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.115829 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw"] Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.116031 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw" podUID="fbfdd647-1d64-4d35-9af2-6dee52b4c860" containerName="route-controller-manager" containerID="cri-o://3e427856d41457853f68d5a32dc2f87168c0f52da592bc0a2b3db34f710fea2b" gracePeriod=30 Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.203048 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0d972c0-6998-401f-8f0a-5bea6ed5590f-config-volume\") pod \"collect-profiles-29495055-4hxfc\" (UID: 
\"d0d972c0-6998-401f-8f0a-5bea6ed5590f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4hxfc" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.203119 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjwmg\" (UniqueName: \"kubernetes.io/projected/d0d972c0-6998-401f-8f0a-5bea6ed5590f-kube-api-access-sjwmg\") pod \"collect-profiles-29495055-4hxfc\" (UID: \"d0d972c0-6998-401f-8f0a-5bea6ed5590f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4hxfc" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.203146 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0d972c0-6998-401f-8f0a-5bea6ed5590f-secret-volume\") pod \"collect-profiles-29495055-4hxfc\" (UID: \"d0d972c0-6998-401f-8f0a-5bea6ed5590f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4hxfc" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.204188 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0d972c0-6998-401f-8f0a-5bea6ed5590f-config-volume\") pod \"collect-profiles-29495055-4hxfc\" (UID: \"d0d972c0-6998-401f-8f0a-5bea6ed5590f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4hxfc" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.209692 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0d972c0-6998-401f-8f0a-5bea6ed5590f-secret-volume\") pod \"collect-profiles-29495055-4hxfc\" (UID: \"d0d972c0-6998-401f-8f0a-5bea6ed5590f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4hxfc" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.233994 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjwmg\" (UniqueName: \"kubernetes.io/projected/d0d972c0-6998-401f-8f0a-5bea6ed5590f-kube-api-access-sjwmg\") pod \"collect-profiles-29495055-4hxfc\" (UID: \"d0d972c0-6998-401f-8f0a-5bea6ed5590f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4hxfc" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.390246 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4hxfc" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.490864 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.511872 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbfdd647-1d64-4d35-9af2-6dee52b4c860-serving-cert\") pod \"fbfdd647-1d64-4d35-9af2-6dee52b4c860\" (UID: \"fbfdd647-1d64-4d35-9af2-6dee52b4c860\") " Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.511909 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbfdd647-1d64-4d35-9af2-6dee52b4c860-config\") pod \"fbfdd647-1d64-4d35-9af2-6dee52b4c860\" (UID: \"fbfdd647-1d64-4d35-9af2-6dee52b4c860\") " Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.511946 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fbfdd647-1d64-4d35-9af2-6dee52b4c860-client-ca\") pod \"fbfdd647-1d64-4d35-9af2-6dee52b4c860\" (UID: \"fbfdd647-1d64-4d35-9af2-6dee52b4c860\") " Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.512016 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srhd7\" (UniqueName: \"kubernetes.io/projected/fbfdd647-1d64-4d35-9af2-6dee52b4c860-kube-api-access-srhd7\") pod \"fbfdd647-1d64-4d35-9af2-6dee52b4c860\" (UID: \"fbfdd647-1d64-4d35-9af2-6dee52b4c860\") " Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.513136 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbfdd647-1d64-4d35-9af2-6dee52b4c860-config" (OuterVolumeSpecName: "config") pod "fbfdd647-1d64-4d35-9af2-6dee52b4c860" (UID: "fbfdd647-1d64-4d35-9af2-6dee52b4c860"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.513850 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbfdd647-1d64-4d35-9af2-6dee52b4c860-client-ca" (OuterVolumeSpecName: "client-ca") pod "fbfdd647-1d64-4d35-9af2-6dee52b4c860" (UID: "fbfdd647-1d64-4d35-9af2-6dee52b4c860"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.527507 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbfdd647-1d64-4d35-9af2-6dee52b4c860-kube-api-access-srhd7" (OuterVolumeSpecName: "kube-api-access-srhd7") pod "fbfdd647-1d64-4d35-9af2-6dee52b4c860" (UID: "fbfdd647-1d64-4d35-9af2-6dee52b4c860"). InnerVolumeSpecName "kube-api-access-srhd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.532298 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbfdd647-1d64-4d35-9af2-6dee52b4c860-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fbfdd647-1d64-4d35-9af2-6dee52b4c860" (UID: "fbfdd647-1d64-4d35-9af2-6dee52b4c860"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.546582 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.613181 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c2d0611-58f8-4a7e-8280-361c80d62802-config\") pod \"3c2d0611-58f8-4a7e-8280-361c80d62802\" (UID: \"3c2d0611-58f8-4a7e-8280-361c80d62802\") " Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.613249 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c2d0611-58f8-4a7e-8280-361c80d62802-proxy-ca-bundles\") pod \"3c2d0611-58f8-4a7e-8280-361c80d62802\" (UID: \"3c2d0611-58f8-4a7e-8280-361c80d62802\") " Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.613316 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pk8rh\" (UniqueName: \"kubernetes.io/projected/3c2d0611-58f8-4a7e-8280-361c80d62802-kube-api-access-pk8rh\") pod \"3c2d0611-58f8-4a7e-8280-361c80d62802\" (UID: \"3c2d0611-58f8-4a7e-8280-361c80d62802\") " Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.613340 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c2d0611-58f8-4a7e-8280-361c80d62802-serving-cert\") pod \"3c2d0611-58f8-4a7e-8280-361c80d62802\" (UID: \"3c2d0611-58f8-4a7e-8280-361c80d62802\") " Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.613493 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c2d0611-58f8-4a7e-8280-361c80d62802-client-ca\") pod \"3c2d0611-58f8-4a7e-8280-361c80d62802\" (UID: \"3c2d0611-58f8-4a7e-8280-361c80d62802\") " Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.613846 4714 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbfdd647-1d64-4d35-9af2-6dee52b4c860-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.613869 4714 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbfdd647-1d64-4d35-9af2-6dee52b4c860-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.613881 4714 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fbfdd647-1d64-4d35-9af2-6dee52b4c860-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.613895 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srhd7\" (UniqueName: \"kubernetes.io/projected/fbfdd647-1d64-4d35-9af2-6dee52b4c860-kube-api-access-srhd7\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.614218 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c2d0611-58f8-4a7e-8280-361c80d62802-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3c2d0611-58f8-4a7e-8280-361c80d62802" (UID: "3c2d0611-58f8-4a7e-8280-361c80d62802"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.614582 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c2d0611-58f8-4a7e-8280-361c80d62802-client-ca" (OuterVolumeSpecName: "client-ca") pod "3c2d0611-58f8-4a7e-8280-361c80d62802" (UID: "3c2d0611-58f8-4a7e-8280-361c80d62802"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.616121 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c2d0611-58f8-4a7e-8280-361c80d62802-config" (OuterVolumeSpecName: "config") pod "3c2d0611-58f8-4a7e-8280-361c80d62802" (UID: "3c2d0611-58f8-4a7e-8280-361c80d62802"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.618568 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2d0611-58f8-4a7e-8280-361c80d62802-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3c2d0611-58f8-4a7e-8280-361c80d62802" (UID: "3c2d0611-58f8-4a7e-8280-361c80d62802"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.618712 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c2d0611-58f8-4a7e-8280-361c80d62802-kube-api-access-pk8rh" (OuterVolumeSpecName: "kube-api-access-pk8rh") pod "3c2d0611-58f8-4a7e-8280-361c80d62802" (UID: "3c2d0611-58f8-4a7e-8280-361c80d62802"). InnerVolumeSpecName "kube-api-access-pk8rh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.714912 4714 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c2d0611-58f8-4a7e-8280-361c80d62802-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.714960 4714 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c2d0611-58f8-4a7e-8280-361c80d62802-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.714970 4714 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c2d0611-58f8-4a7e-8280-361c80d62802-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.714981 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk8rh\" (UniqueName: \"kubernetes.io/projected/3c2d0611-58f8-4a7e-8280-361c80d62802-kube-api-access-pk8rh\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.714990 4714 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c2d0611-58f8-4a7e-8280-361c80d62802-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.868106 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495055-4hxfc"] Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.983680 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4hxfc" 
event={"ID":"d0d972c0-6998-401f-8f0a-5bea6ed5590f","Type":"ContainerStarted","Data":"b6b3b048229369338cb18b65251b8d1da3ff36603f5a4de17d64b736eefdc448"} Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.985009 4714 generic.go:334] "Generic (PLEG): container finished" podID="3c2d0611-58f8-4a7e-8280-361c80d62802" containerID="1a48e7995b6652b74e320bf25807bcc51cd5f49496615ac9028d0bf40f37019e" exitCode=0 Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.985052 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" event={"ID":"3c2d0611-58f8-4a7e-8280-361c80d62802","Type":"ContainerDied","Data":"1a48e7995b6652b74e320bf25807bcc51cd5f49496615ac9028d0bf40f37019e"} Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.985093 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.985120 4714 scope.go:117] "RemoveContainer" containerID="1a48e7995b6652b74e320bf25807bcc51cd5f49496615ac9028d0bf40f37019e" Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.985101 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xlczd" event={"ID":"3c2d0611-58f8-4a7e-8280-361c80d62802","Type":"ContainerDied","Data":"997bb46f3e8548114daabdb0676e47c164f03b6651e1e3ef03b31f66106dbebd"} Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.988270 4714 generic.go:334] "Generic (PLEG): container finished" podID="fbfdd647-1d64-4d35-9af2-6dee52b4c860" containerID="3e427856d41457853f68d5a32dc2f87168c0f52da592bc0a2b3db34f710fea2b" exitCode=0 Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.988309 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw" event={"ID":"fbfdd647-1d64-4d35-9af2-6dee52b4c860","Type":"ContainerDied","Data":"3e427856d41457853f68d5a32dc2f87168c0f52da592bc0a2b3db34f710fea2b"} Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.988332 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw" event={"ID":"fbfdd647-1d64-4d35-9af2-6dee52b4c860","Type":"ContainerDied","Data":"79940598fef6f2445dc05d94ab28a7d984953a342201b3331c2b27e4796135a0"} Jan 29 16:15:28 crc kubenswrapper[4714]: I0129 16:15:28.988376 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw" Jan 29 16:15:29 crc kubenswrapper[4714]: I0129 16:15:29.004640 4714 scope.go:117] "RemoveContainer" containerID="1a48e7995b6652b74e320bf25807bcc51cd5f49496615ac9028d0bf40f37019e" Jan 29 16:15:29 crc kubenswrapper[4714]: E0129 16:15:29.005258 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a48e7995b6652b74e320bf25807bcc51cd5f49496615ac9028d0bf40f37019e\": container with ID starting with 1a48e7995b6652b74e320bf25807bcc51cd5f49496615ac9028d0bf40f37019e not found: ID does not exist" containerID="1a48e7995b6652b74e320bf25807bcc51cd5f49496615ac9028d0bf40f37019e" Jan 29 16:15:29 crc kubenswrapper[4714]: I0129 16:15:29.005304 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a48e7995b6652b74e320bf25807bcc51cd5f49496615ac9028d0bf40f37019e"} err="failed to get container status \"1a48e7995b6652b74e320bf25807bcc51cd5f49496615ac9028d0bf40f37019e\": rpc error: code = NotFound desc = could not find container \"1a48e7995b6652b74e320bf25807bcc51cd5f49496615ac9028d0bf40f37019e\": container with ID starting with 1a48e7995b6652b74e320bf25807bcc51cd5f49496615ac9028d0bf40f37019e not found: ID does not exist" Jan 29 16:15:29 crc kubenswrapper[4714]: I0129 16:15:29.005333 4714 scope.go:117] "RemoveContainer" containerID="3e427856d41457853f68d5a32dc2f87168c0f52da592bc0a2b3db34f710fea2b" Jan 29 16:15:29 crc kubenswrapper[4714]: I0129 16:15:29.025296 4714 scope.go:117] "RemoveContainer" containerID="3e427856d41457853f68d5a32dc2f87168c0f52da592bc0a2b3db34f710fea2b" Jan 29 16:15:29 crc kubenswrapper[4714]: E0129 16:15:29.028661 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e427856d41457853f68d5a32dc2f87168c0f52da592bc0a2b3db34f710fea2b\": container with ID starting with 3e427856d41457853f68d5a32dc2f87168c0f52da592bc0a2b3db34f710fea2b not found: ID does not exist" containerID="3e427856d41457853f68d5a32dc2f87168c0f52da592bc0a2b3db34f710fea2b" Jan 29 16:15:29 crc kubenswrapper[4714]: I0129 16:15:29.028719 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e427856d41457853f68d5a32dc2f87168c0f52da592bc0a2b3db34f710fea2b"} err="failed to get container status \"3e427856d41457853f68d5a32dc2f87168c0f52da592bc0a2b3db34f710fea2b\": rpc error: code = NotFound desc = could not find container \"3e427856d41457853f68d5a32dc2f87168c0f52da592bc0a2b3db34f710fea2b\": container with ID starting with 3e427856d41457853f68d5a32dc2f87168c0f52da592bc0a2b3db34f710fea2b not found: ID does not exist" Jan 29 16:15:29 crc kubenswrapper[4714]: I0129 16:15:29.034879 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw"] Jan 29 16:15:29 crc kubenswrapper[4714]: I0129 16:15:29.043272 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m2qxw"] Jan 29 16:15:29 crc kubenswrapper[4714]: I0129 16:15:29.046584 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xlczd"] Jan 29 16:15:29 crc kubenswrapper[4714]: I0129 16:15:29.050207 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xlczd"] Jan 29 
16:15:29 crc kubenswrapper[4714]: I0129 16:15:29.997180 4714 generic.go:334] "Generic (PLEG): container finished" podID="d0d972c0-6998-401f-8f0a-5bea6ed5590f" containerID="4b672d789aed1cc6acdb183556fd62ebd76f645741d01b1b70856eabac3f1f5f" exitCode=0 Jan 29 16:15:29 crc kubenswrapper[4714]: I0129 16:15:29.997253 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4hxfc" event={"ID":"d0d972c0-6998-401f-8f0a-5bea6ed5590f","Type":"ContainerDied","Data":"4b672d789aed1cc6acdb183556fd62ebd76f645741d01b1b70856eabac3f1f5f"} Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.009865 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b89b4855f-vq5x7"] Jan 29 16:15:30 crc kubenswrapper[4714]: E0129 16:15:30.010221 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbfdd647-1d64-4d35-9af2-6dee52b4c860" containerName="route-controller-manager" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.010249 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbfdd647-1d64-4d35-9af2-6dee52b4c860" containerName="route-controller-manager" Jan 29 16:15:30 crc kubenswrapper[4714]: E0129 16:15:30.010278 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c2d0611-58f8-4a7e-8280-361c80d62802" containerName="controller-manager" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.010291 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c2d0611-58f8-4a7e-8280-361c80d62802" containerName="controller-manager" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.010511 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c2d0611-58f8-4a7e-8280-361c80d62802" containerName="controller-manager" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.010542 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbfdd647-1d64-4d35-9af2-6dee52b4c860" containerName="route-controller-manager" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.011127 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b89b4855f-vq5x7" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.014588 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.015650 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.016204 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.016365 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.016658 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.017178 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td"] Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.018357 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.019092 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.021722 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.022159 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.022823 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.023258 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.023312 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.025496 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.028845 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b89b4855f-vq5x7"] Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.031525 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.043954 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-866ps\" (UniqueName: \"kubernetes.io/projected/5304db45-6305-4176-a472-ce79c6a873bc-kube-api-access-866ps\") pod \"controller-manager-6b89b4855f-vq5x7\" (UID: \"5304db45-6305-4176-a472-ce79c6a873bc\") " pod="openshift-controller-manager/controller-manager-6b89b4855f-vq5x7" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.044006 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5304db45-6305-4176-a472-ce79c6a873bc-serving-cert\") pod \"controller-manager-6b89b4855f-vq5x7\" (UID: \"5304db45-6305-4176-a472-ce79c6a873bc\") " pod="openshift-controller-manager/controller-manager-6b89b4855f-vq5x7" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.044038 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5304db45-6305-4176-a472-ce79c6a873bc-proxy-ca-bundles\") pod \"controller-manager-6b89b4855f-vq5x7\" (UID: \"5304db45-6305-4176-a472-ce79c6a873bc\") " pod="openshift-controller-manager/controller-manager-6b89b4855f-vq5x7" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.046958 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5304db45-6305-4176-a472-ce79c6a873bc-client-ca\") pod \"controller-manager-6b89b4855f-vq5x7\" (UID: \"5304db45-6305-4176-a472-ce79c6a873bc\") " 
pod="openshift-controller-manager/controller-manager-6b89b4855f-vq5x7" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.047046 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5304db45-6305-4176-a472-ce79c6a873bc-config\") pod \"controller-manager-6b89b4855f-vq5x7\" (UID: \"5304db45-6305-4176-a472-ce79c6a873bc\") " pod="openshift-controller-manager/controller-manager-6b89b4855f-vq5x7" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.053960 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td"] Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.148565 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5304db45-6305-4176-a472-ce79c6a873bc-client-ca\") pod \"controller-manager-6b89b4855f-vq5x7\" (UID: \"5304db45-6305-4176-a472-ce79c6a873bc\") " pod="openshift-controller-manager/controller-manager-6b89b4855f-vq5x7" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.148625 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99c8ea80-a927-49e6-96fb-40c16f486883-serving-cert\") pod \"route-controller-manager-5ccd555bf-xt5td\" (UID: \"99c8ea80-a927-49e6-96fb-40c16f486883\") " pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.148653 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5304db45-6305-4176-a472-ce79c6a873bc-config\") pod \"controller-manager-6b89b4855f-vq5x7\" (UID: \"5304db45-6305-4176-a472-ce79c6a873bc\") " pod="openshift-controller-manager/controller-manager-6b89b4855f-vq5x7" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.148670 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99c8ea80-a927-49e6-96fb-40c16f486883-client-ca\") pod \"route-controller-manager-5ccd555bf-xt5td\" (UID: \"99c8ea80-a927-49e6-96fb-40c16f486883\") " pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.148732 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk544\" (UniqueName: \"kubernetes.io/projected/99c8ea80-a927-49e6-96fb-40c16f486883-kube-api-access-gk544\") pod \"route-controller-manager-5ccd555bf-xt5td\" (UID: \"99c8ea80-a927-49e6-96fb-40c16f486883\") " pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.148762 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-866ps\" (UniqueName: \"kubernetes.io/projected/5304db45-6305-4176-a472-ce79c6a873bc-kube-api-access-866ps\") pod \"controller-manager-6b89b4855f-vq5x7\" (UID: \"5304db45-6305-4176-a472-ce79c6a873bc\") " pod="openshift-controller-manager/controller-manager-6b89b4855f-vq5x7" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.148778 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5304db45-6305-4176-a472-ce79c6a873bc-serving-cert\") pod \"controller-manager-6b89b4855f-vq5x7\" (UID: \"5304db45-6305-4176-a472-ce79c6a873bc\") " pod="openshift-controller-manager/controller-manager-6b89b4855f-vq5x7" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.148791 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5304db45-6305-4176-a472-ce79c6a873bc-proxy-ca-bundles\") pod \"controller-manager-6b89b4855f-vq5x7\" (UID: \"5304db45-6305-4176-a472-ce79c6a873bc\") " pod="openshift-controller-manager/controller-manager-6b89b4855f-vq5x7" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.148812 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99c8ea80-a927-49e6-96fb-40c16f486883-config\") pod \"route-controller-manager-5ccd555bf-xt5td\" (UID: \"99c8ea80-a927-49e6-96fb-40c16f486883\") " pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.149837 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5304db45-6305-4176-a472-ce79c6a873bc-client-ca\") pod \"controller-manager-6b89b4855f-vq5x7\" (UID: \"5304db45-6305-4176-a472-ce79c6a873bc\") " pod="openshift-controller-manager/controller-manager-6b89b4855f-vq5x7" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.149971 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5304db45-6305-4176-a472-ce79c6a873bc-proxy-ca-bundles\") pod \"controller-manager-6b89b4855f-vq5x7\" (UID: \"5304db45-6305-4176-a472-ce79c6a873bc\") " pod="openshift-controller-manager/controller-manager-6b89b4855f-vq5x7" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.151130 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5304db45-6305-4176-a472-ce79c6a873bc-config\") pod \"controller-manager-6b89b4855f-vq5x7\" (UID: \"5304db45-6305-4176-a472-ce79c6a873bc\") " pod="openshift-controller-manager/controller-manager-6b89b4855f-vq5x7" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.160007 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5304db45-6305-4176-a472-ce79c6a873bc-serving-cert\") pod \"controller-manager-6b89b4855f-vq5x7\" (UID: \"5304db45-6305-4176-a472-ce79c6a873bc\") " pod="openshift-controller-manager/controller-manager-6b89b4855f-vq5x7" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.166430 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-866ps\" (UniqueName: \"kubernetes.io/projected/5304db45-6305-4176-a472-ce79c6a873bc-kube-api-access-866ps\") pod \"controller-manager-6b89b4855f-vq5x7\" (UID: \"5304db45-6305-4176-a472-ce79c6a873bc\") " pod="openshift-controller-manager/controller-manager-6b89b4855f-vq5x7" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.191578 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c2d0611-58f8-4a7e-8280-361c80d62802" path="/var/lib/kubelet/pods/3c2d0611-58f8-4a7e-8280-361c80d62802/volumes" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.192678 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="fbfdd647-1d64-4d35-9af2-6dee52b4c860" path="/var/lib/kubelet/pods/fbfdd647-1d64-4d35-9af2-6dee52b4c860/volumes" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.249874 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk544\" (UniqueName: \"kubernetes.io/projected/99c8ea80-a927-49e6-96fb-40c16f486883-kube-api-access-gk544\") pod \"route-controller-manager-5ccd555bf-xt5td\" (UID: \"99c8ea80-a927-49e6-96fb-40c16f486883\") " pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.249964 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99c8ea80-a927-49e6-96fb-40c16f486883-config\") pod \"route-controller-manager-5ccd555bf-xt5td\" (UID: \"99c8ea80-a927-49e6-96fb-40c16f486883\") " pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.250020 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99c8ea80-a927-49e6-96fb-40c16f486883-serving-cert\") pod \"route-controller-manager-5ccd555bf-xt5td\" (UID: \"99c8ea80-a927-49e6-96fb-40c16f486883\") " pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.250051 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99c8ea80-a927-49e6-96fb-40c16f486883-client-ca\") pod \"route-controller-manager-5ccd555bf-xt5td\" (UID: \"99c8ea80-a927-49e6-96fb-40c16f486883\") " pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.251039 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99c8ea80-a927-49e6-96fb-40c16f486883-client-ca\") pod \"route-controller-manager-5ccd555bf-xt5td\" (UID: \"99c8ea80-a927-49e6-96fb-40c16f486883\") " pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.251271 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99c8ea80-a927-49e6-96fb-40c16f486883-config\") pod \"route-controller-manager-5ccd555bf-xt5td\" (UID: \"99c8ea80-a927-49e6-96fb-40c16f486883\") " pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.253574 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99c8ea80-a927-49e6-96fb-40c16f486883-serving-cert\") pod \"route-controller-manager-5ccd555bf-xt5td\" (UID: \"99c8ea80-a927-49e6-96fb-40c16f486883\") " pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.264921 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk544\" (UniqueName: \"kubernetes.io/projected/99c8ea80-a927-49e6-96fb-40c16f486883-kube-api-access-gk544\") pod \"route-controller-manager-5ccd555bf-xt5td\" (UID: \"99c8ea80-a927-49e6-96fb-40c16f486883\") " pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" 
Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.341703 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b89b4855f-vq5x7" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.364008 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.792254 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td"] Jan 29 16:15:30 crc kubenswrapper[4714]: I0129 16:15:30.803098 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b89b4855f-vq5x7"] Jan 29 16:15:31 crc kubenswrapper[4714]: I0129 16:15:31.009736 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" event={"ID":"99c8ea80-a927-49e6-96fb-40c16f486883","Type":"ContainerStarted","Data":"d19fbbf2f1739d674126fe1890b29729a713b61f7c1394a35442defb39bc3f82"} Jan 29 16:15:31 crc kubenswrapper[4714]: I0129 16:15:31.009783 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" event={"ID":"99c8ea80-a927-49e6-96fb-40c16f486883","Type":"ContainerStarted","Data":"af0e31a2974ff0d8ba2387e7344483b384496c48296610f8600e9690dddb022f"} Jan 29 16:15:31 crc kubenswrapper[4714]: I0129 16:15:31.010022 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" Jan 29 16:15:31 crc kubenswrapper[4714]: I0129 16:15:31.014000 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b89b4855f-vq5x7" event={"ID":"5304db45-6305-4176-a472-ce79c6a873bc","Type":"ContainerStarted","Data":"68cf4b129bb68130c5704ef391f7af99e04fab51cea46cd69c6b56c7ab855d3d"} Jan 29 16:15:31 crc kubenswrapper[4714]: I0129 16:15:31.014074 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b89b4855f-vq5x7" event={"ID":"5304db45-6305-4176-a472-ce79c6a873bc","Type":"ContainerStarted","Data":"51a48632aa0b57e7418c6d7638342a8c28a8562d8cca769a5ff82f52cb8a2256"} Jan 29 16:15:31 crc kubenswrapper[4714]: I0129 16:15:31.014261 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b89b4855f-vq5x7" Jan 29 16:15:31 crc kubenswrapper[4714]: I0129 16:15:31.015179 4714 patch_prober.go:28] interesting pod/controller-manager-6b89b4855f-vq5x7 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Jan 29 16:15:31 crc kubenswrapper[4714]: I0129 16:15:31.015227 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6b89b4855f-vq5x7" podUID="5304db45-6305-4176-a472-ce79c6a873bc" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Jan 29 16:15:31 crc kubenswrapper[4714]: I0129 16:15:31.026766 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" podStartSLOduration=3.026747259 podStartE2EDuration="3.026747259s" podCreationTimestamp="2026-01-29 16:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:15:31.025198095 +0000 UTC m=+337.545699215" watchObservedRunningTime="2026-01-29 16:15:31.026747259 +0000 UTC m=+337.547248379" Jan 29 16:15:31 crc kubenswrapper[4714]: I0129 16:15:31.052283 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b89b4855f-vq5x7" podStartSLOduration=3.052259729 podStartE2EDuration="3.052259729s" podCreationTimestamp="2026-01-29 16:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:15:31.046245429 +0000 UTC m=+337.566746549" watchObservedRunningTime="2026-01-29 16:15:31.052259729 +0000 UTC m=+337.572760849" Jan 29 16:15:31 crc kubenswrapper[4714]: I0129 16:15:31.349197 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" Jan 29 16:15:31 crc kubenswrapper[4714]: I0129 16:15:31.365089 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4hxfc" Jan 29 16:15:31 crc kubenswrapper[4714]: I0129 16:15:31.468702 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0d972c0-6998-401f-8f0a-5bea6ed5590f-secret-volume\") pod \"d0d972c0-6998-401f-8f0a-5bea6ed5590f\" (UID: \"d0d972c0-6998-401f-8f0a-5bea6ed5590f\") " Jan 29 16:15:31 crc kubenswrapper[4714]: I0129 16:15:31.468777 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjwmg\" (UniqueName: \"kubernetes.io/projected/d0d972c0-6998-401f-8f0a-5bea6ed5590f-kube-api-access-sjwmg\") pod \"d0d972c0-6998-401f-8f0a-5bea6ed5590f\" (UID: \"d0d972c0-6998-401f-8f0a-5bea6ed5590f\") " Jan 29 16:15:31 crc kubenswrapper[4714]: I0129 16:15:31.468824 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0d972c0-6998-401f-8f0a-5bea6ed5590f-config-volume\") pod \"d0d972c0-6998-401f-8f0a-5bea6ed5590f\" (UID: \"d0d972c0-6998-401f-8f0a-5bea6ed5590f\") " Jan 29 16:15:31 crc kubenswrapper[4714]: I0129 16:15:31.469439 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0d972c0-6998-401f-8f0a-5bea6ed5590f-config-volume" (OuterVolumeSpecName: "config-volume") pod "d0d972c0-6998-401f-8f0a-5bea6ed5590f" (UID: "d0d972c0-6998-401f-8f0a-5bea6ed5590f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:15:31 crc kubenswrapper[4714]: I0129 16:15:31.473714 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0d972c0-6998-401f-8f0a-5bea6ed5590f-kube-api-access-sjwmg" (OuterVolumeSpecName: "kube-api-access-sjwmg") pod "d0d972c0-6998-401f-8f0a-5bea6ed5590f" (UID: "d0d972c0-6998-401f-8f0a-5bea6ed5590f"). InnerVolumeSpecName "kube-api-access-sjwmg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:15:31 crc kubenswrapper[4714]: I0129 16:15:31.477094 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0d972c0-6998-401f-8f0a-5bea6ed5590f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d0d972c0-6998-401f-8f0a-5bea6ed5590f" (UID: "d0d972c0-6998-401f-8f0a-5bea6ed5590f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:15:31 crc kubenswrapper[4714]: I0129 16:15:31.570380 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjwmg\" (UniqueName: \"kubernetes.io/projected/d0d972c0-6998-401f-8f0a-5bea6ed5590f-kube-api-access-sjwmg\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:31 crc kubenswrapper[4714]: I0129 16:15:31.570415 4714 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0d972c0-6998-401f-8f0a-5bea6ed5590f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:31 crc kubenswrapper[4714]: I0129 16:15:31.570424 4714 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0d972c0-6998-401f-8f0a-5bea6ed5590f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:32 crc kubenswrapper[4714]: I0129 16:15:32.023892 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4hxfc" event={"ID":"d0d972c0-6998-401f-8f0a-5bea6ed5590f","Type":"ContainerDied","Data":"b6b3b048229369338cb18b65251b8d1da3ff36603f5a4de17d64b736eefdc448"} Jan 29 16:15:32 crc kubenswrapper[4714]: I0129 16:15:32.024004 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6b3b048229369338cb18b65251b8d1da3ff36603f5a4de17d64b736eefdc448" Jan 29 16:15:32 crc kubenswrapper[4714]: I0129 16:15:32.024050 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4hxfc" Jan 29 16:15:32 crc kubenswrapper[4714]: I0129 16:15:32.028373 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6b89b4855f-vq5x7" Jan 29 16:15:57 crc kubenswrapper[4714]: I0129 16:15:57.844761 4714 patch_prober.go:28] interesting pod/machine-config-daemon-ppngk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:15:57 crc kubenswrapper[4714]: I0129 16:15:57.845311 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:16:03 crc kubenswrapper[4714]: I0129 16:16:03.619896 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6bjgq"] Jan 29 16:16:03 crc kubenswrapper[4714]: I0129 16:16:03.622497 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6bjgq" podUID="98a35d03-ef3b-4341-9866-56d12a28aee3" containerName="registry-server" containerID="cri-o://7d09f4f35a658993558ef15a141e74b57af657ad6538fc1eecadfe107e507595" gracePeriod=2 Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.058716 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6bjgq" Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.166083 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98a35d03-ef3b-4341-9866-56d12a28aee3-catalog-content\") pod \"98a35d03-ef3b-4341-9866-56d12a28aee3\" (UID: \"98a35d03-ef3b-4341-9866-56d12a28aee3\") " Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.166169 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98a35d03-ef3b-4341-9866-56d12a28aee3-utilities\") pod \"98a35d03-ef3b-4341-9866-56d12a28aee3\" (UID: \"98a35d03-ef3b-4341-9866-56d12a28aee3\") " Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.166314 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gks4z\" (UniqueName: \"kubernetes.io/projected/98a35d03-ef3b-4341-9866-56d12a28aee3-kube-api-access-gks4z\") pod \"98a35d03-ef3b-4341-9866-56d12a28aee3\" (UID: \"98a35d03-ef3b-4341-9866-56d12a28aee3\") " Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.167321 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98a35d03-ef3b-4341-9866-56d12a28aee3-utilities" (OuterVolumeSpecName: "utilities") pod "98a35d03-ef3b-4341-9866-56d12a28aee3" (UID: "98a35d03-ef3b-4341-9866-56d12a28aee3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.173369 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98a35d03-ef3b-4341-9866-56d12a28aee3-kube-api-access-gks4z" (OuterVolumeSpecName: "kube-api-access-gks4z") pod "98a35d03-ef3b-4341-9866-56d12a28aee3" (UID: "98a35d03-ef3b-4341-9866-56d12a28aee3"). InnerVolumeSpecName "kube-api-access-gks4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.221534 4714 generic.go:334] "Generic (PLEG): container finished" podID="98a35d03-ef3b-4341-9866-56d12a28aee3" containerID="7d09f4f35a658993558ef15a141e74b57af657ad6538fc1eecadfe107e507595" exitCode=0 Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.221581 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6bjgq" Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.221603 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6bjgq" event={"ID":"98a35d03-ef3b-4341-9866-56d12a28aee3","Type":"ContainerDied","Data":"7d09f4f35a658993558ef15a141e74b57af657ad6538fc1eecadfe107e507595"} Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.221781 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6bjgq" event={"ID":"98a35d03-ef3b-4341-9866-56d12a28aee3","Type":"ContainerDied","Data":"d38f58d434dcb4497833c894d88b3ceb4be10c7a4d69f2a5403bda7aa069a88c"} Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.221831 4714 scope.go:117] "RemoveContainer" containerID="7d09f4f35a658993558ef15a141e74b57af657ad6538fc1eecadfe107e507595" Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.236190 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98a35d03-ef3b-4341-9866-56d12a28aee3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98a35d03-ef3b-4341-9866-56d12a28aee3" (UID: "98a35d03-ef3b-4341-9866-56d12a28aee3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.245042 4714 scope.go:117] "RemoveContainer" containerID="aaa814f95896ac1eaedc7b86e2197c8140345f5296d9cb5ecb6a3dac9cee5ab6" Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.268307 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gks4z\" (UniqueName: \"kubernetes.io/projected/98a35d03-ef3b-4341-9866-56d12a28aee3-kube-api-access-gks4z\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.268343 4714 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98a35d03-ef3b-4341-9866-56d12a28aee3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.268355 4714 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98a35d03-ef3b-4341-9866-56d12a28aee3-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.271405 4714 scope.go:117] "RemoveContainer" containerID="7d5da21ddc846f7484a829e4e1b7d4f27ddad6196e5ccbce162fd0a2651869dd" Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.290816 4714 scope.go:117] "RemoveContainer" containerID="7d09f4f35a658993558ef15a141e74b57af657ad6538fc1eecadfe107e507595" Jan 29 16:16:04 crc kubenswrapper[4714]: E0129 16:16:04.291371 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d09f4f35a658993558ef15a141e74b57af657ad6538fc1eecadfe107e507595\": container with ID starting with 7d09f4f35a658993558ef15a141e74b57af657ad6538fc1eecadfe107e507595 not found: ID does not exist" containerID="7d09f4f35a658993558ef15a141e74b57af657ad6538fc1eecadfe107e507595" Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.291429 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d09f4f35a658993558ef15a141e74b57af657ad6538fc1eecadfe107e507595"} err="failed to get container status \"7d09f4f35a658993558ef15a141e74b57af657ad6538fc1eecadfe107e507595\": rpc error: code = NotFound desc = could not find container \"7d09f4f35a658993558ef15a141e74b57af657ad6538fc1eecadfe107e507595\": container with ID starting with 7d09f4f35a658993558ef15a141e74b57af657ad6538fc1eecadfe107e507595 not found: ID does not exist" Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.291482 4714 scope.go:117] "RemoveContainer" containerID="aaa814f95896ac1eaedc7b86e2197c8140345f5296d9cb5ecb6a3dac9cee5ab6" Jan 29 16:16:04 crc kubenswrapper[4714]: E0129 16:16:04.291897 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaa814f95896ac1eaedc7b86e2197c8140345f5296d9cb5ecb6a3dac9cee5ab6\": container with ID starting with aaa814f95896ac1eaedc7b86e2197c8140345f5296d9cb5ecb6a3dac9cee5ab6 not found: ID does not exist" containerID="aaa814f95896ac1eaedc7b86e2197c8140345f5296d9cb5ecb6a3dac9cee5ab6" Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.291991 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaa814f95896ac1eaedc7b86e2197c8140345f5296d9cb5ecb6a3dac9cee5ab6"} err="failed to get container status \"aaa814f95896ac1eaedc7b86e2197c8140345f5296d9cb5ecb6a3dac9cee5ab6\": rpc error: code = NotFound desc = could not find container 
\"aaa814f95896ac1eaedc7b86e2197c8140345f5296d9cb5ecb6a3dac9cee5ab6\": container with ID starting with aaa814f95896ac1eaedc7b86e2197c8140345f5296d9cb5ecb6a3dac9cee5ab6 not found: ID does not exist" Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.292030 4714 scope.go:117] "RemoveContainer" containerID="7d5da21ddc846f7484a829e4e1b7d4f27ddad6196e5ccbce162fd0a2651869dd" Jan 29 16:16:04 crc kubenswrapper[4714]: E0129 16:16:04.292353 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d5da21ddc846f7484a829e4e1b7d4f27ddad6196e5ccbce162fd0a2651869dd\": container with ID starting with 7d5da21ddc846f7484a829e4e1b7d4f27ddad6196e5ccbce162fd0a2651869dd not found: ID does not exist" containerID="7d5da21ddc846f7484a829e4e1b7d4f27ddad6196e5ccbce162fd0a2651869dd" Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.292386 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d5da21ddc846f7484a829e4e1b7d4f27ddad6196e5ccbce162fd0a2651869dd"} err="failed to get container status \"7d5da21ddc846f7484a829e4e1b7d4f27ddad6196e5ccbce162fd0a2651869dd\": rpc error: code = NotFound desc = could not find container \"7d5da21ddc846f7484a829e4e1b7d4f27ddad6196e5ccbce162fd0a2651869dd\": container with ID starting with 7d5da21ddc846f7484a829e4e1b7d4f27ddad6196e5ccbce162fd0a2651869dd not found: ID does not exist" Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.553337 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6bjgq"] Jan 29 16:16:04 crc kubenswrapper[4714]: I0129 16:16:04.557165 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6bjgq"] Jan 29 16:16:06 crc kubenswrapper[4714]: I0129 16:16:06.197026 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98a35d03-ef3b-4341-9866-56d12a28aee3" path="/var/lib/kubelet/pods/98a35d03-ef3b-4341-9866-56d12a28aee3/volumes" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.726024 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xrw9s"] Jan 29 16:16:09 crc kubenswrapper[4714]: E0129 16:16:09.726509 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a35d03-ef3b-4341-9866-56d12a28aee3" containerName="registry-server" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.726524 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a35d03-ef3b-4341-9866-56d12a28aee3" containerName="registry-server" Jan 29 16:16:09 crc kubenswrapper[4714]: E0129 16:16:09.726537 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a35d03-ef3b-4341-9866-56d12a28aee3" containerName="extract-content" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.726545 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a35d03-ef3b-4341-9866-56d12a28aee3" containerName="extract-content" Jan 29 16:16:09 crc kubenswrapper[4714]: E0129 16:16:09.726563 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d972c0-6998-401f-8f0a-5bea6ed5590f" containerName="collect-profiles" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.726572 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d972c0-6998-401f-8f0a-5bea6ed5590f" containerName="collect-profiles" Jan 29 16:16:09 crc kubenswrapper[4714]: E0129 16:16:09.726582 4714 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="98a35d03-ef3b-4341-9866-56d12a28aee3" containerName="extract-utilities" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.726590 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a35d03-ef3b-4341-9866-56d12a28aee3" containerName="extract-utilities" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.726729 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0d972c0-6998-401f-8f0a-5bea6ed5590f" containerName="collect-profiles" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.726748 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a35d03-ef3b-4341-9866-56d12a28aee3" containerName="registry-server" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.727175 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.741470 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xrw9s"] Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.841573 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5f131437-8551-46b5-b9e4-ca30784e8e76-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xrw9s\" (UID: \"5f131437-8551-46b5-b9e4-ca30784e8e76\") " pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.841649 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5f131437-8551-46b5-b9e4-ca30784e8e76-registry-certificates\") pod \"image-registry-66df7c8f76-xrw9s\" (UID: \"5f131437-8551-46b5-b9e4-ca30784e8e76\") " pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.841880 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5f131437-8551-46b5-b9e4-ca30784e8e76-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xrw9s\" (UID: \"5f131437-8551-46b5-b9e4-ca30784e8e76\") " pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.841952 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5f131437-8551-46b5-b9e4-ca30784e8e76-registry-tls\") pod \"image-registry-66df7c8f76-xrw9s\" (UID: \"5f131437-8551-46b5-b9e4-ca30784e8e76\") " pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.842048 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5f131437-8551-46b5-b9e4-ca30784e8e76-trusted-ca\") pod \"image-registry-66df7c8f76-xrw9s\" (UID: \"5f131437-8551-46b5-b9e4-ca30784e8e76\") " pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.842095 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f131437-8551-46b5-b9e4-ca30784e8e76-bound-sa-token\") pod \"image-registry-66df7c8f76-xrw9s\" 
(UID: \"5f131437-8551-46b5-b9e4-ca30784e8e76\") " pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.842128 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csr5w\" (UniqueName: \"kubernetes.io/projected/5f131437-8551-46b5-b9e4-ca30784e8e76-kube-api-access-csr5w\") pod \"image-registry-66df7c8f76-xrw9s\" (UID: \"5f131437-8551-46b5-b9e4-ca30784e8e76\") " pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.842183 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xrw9s\" (UID: \"5f131437-8551-46b5-b9e4-ca30784e8e76\") " pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.869565 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xrw9s\" (UID: \"5f131437-8551-46b5-b9e4-ca30784e8e76\") " pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.943693 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5f131437-8551-46b5-b9e4-ca30784e8e76-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xrw9s\" (UID: \"5f131437-8551-46b5-b9e4-ca30784e8e76\") " pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.943769 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5f131437-8551-46b5-b9e4-ca30784e8e76-registry-certificates\") pod \"image-registry-66df7c8f76-xrw9s\" (UID: \"5f131437-8551-46b5-b9e4-ca30784e8e76\") " pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.943854 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5f131437-8551-46b5-b9e4-ca30784e8e76-registry-tls\") pod \"image-registry-66df7c8f76-xrw9s\" (UID: \"5f131437-8551-46b5-b9e4-ca30784e8e76\") " pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.943890 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5f131437-8551-46b5-b9e4-ca30784e8e76-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xrw9s\" (UID: \"5f131437-8551-46b5-b9e4-ca30784e8e76\") " pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.943983 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5f131437-8551-46b5-b9e4-ca30784e8e76-trusted-ca\") pod \"image-registry-66df7c8f76-xrw9s\" (UID: \"5f131437-8551-46b5-b9e4-ca30784e8e76\") " pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:09 crc 
kubenswrapper[4714]: I0129 16:16:09.944033 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f131437-8551-46b5-b9e4-ca30784e8e76-bound-sa-token\") pod \"image-registry-66df7c8f76-xrw9s\" (UID: \"5f131437-8551-46b5-b9e4-ca30784e8e76\") " pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.944568 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csr5w\" (UniqueName: \"kubernetes.io/projected/5f131437-8551-46b5-b9e4-ca30784e8e76-kube-api-access-csr5w\") pod \"image-registry-66df7c8f76-xrw9s\" (UID: \"5f131437-8551-46b5-b9e4-ca30784e8e76\") " pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.945870 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5f131437-8551-46b5-b9e4-ca30784e8e76-registry-certificates\") pod \"image-registry-66df7c8f76-xrw9s\" (UID: \"5f131437-8551-46b5-b9e4-ca30784e8e76\") " pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.946331 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5f131437-8551-46b5-b9e4-ca30784e8e76-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xrw9s\" (UID: \"5f131437-8551-46b5-b9e4-ca30784e8e76\") " pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.946518 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5f131437-8551-46b5-b9e4-ca30784e8e76-trusted-ca\") pod \"image-registry-66df7c8f76-xrw9s\" (UID: \"5f131437-8551-46b5-b9e4-ca30784e8e76\") " pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.952093 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5f131437-8551-46b5-b9e4-ca30784e8e76-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xrw9s\" (UID: \"5f131437-8551-46b5-b9e4-ca30784e8e76\") " pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.954013 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5f131437-8551-46b5-b9e4-ca30784e8e76-registry-tls\") pod \"image-registry-66df7c8f76-xrw9s\" (UID: \"5f131437-8551-46b5-b9e4-ca30784e8e76\") " pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.965363 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csr5w\" (UniqueName: \"kubernetes.io/projected/5f131437-8551-46b5-b9e4-ca30784e8e76-kube-api-access-csr5w\") pod \"image-registry-66df7c8f76-xrw9s\" (UID: \"5f131437-8551-46b5-b9e4-ca30784e8e76\") " pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:09 crc kubenswrapper[4714]: I0129 16:16:09.965642 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f131437-8551-46b5-b9e4-ca30784e8e76-bound-sa-token\") pod 
\"image-registry-66df7c8f76-xrw9s\" (UID: \"5f131437-8551-46b5-b9e4-ca30784e8e76\") " pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:10 crc kubenswrapper[4714]: I0129 16:16:10.047268 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:10 crc kubenswrapper[4714]: I0129 16:16:10.258443 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td"] Jan 29 16:16:10 crc kubenswrapper[4714]: I0129 16:16:10.263669 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" podUID="99c8ea80-a927-49e6-96fb-40c16f486883" containerName="route-controller-manager" containerID="cri-o://d19fbbf2f1739d674126fe1890b29729a713b61f7c1394a35442defb39bc3f82" gracePeriod=30 Jan 29 16:16:10 crc kubenswrapper[4714]: I0129 16:16:10.347794 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xrw9s"] Jan 29 16:16:10 crc kubenswrapper[4714]: I0129 16:16:10.365306 4714 patch_prober.go:28] interesting pod/route-controller-manager-5ccd555bf-xt5td container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Jan 29 16:16:10 crc kubenswrapper[4714]: I0129 16:16:10.365356 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" podUID="99c8ea80-a927-49e6-96fb-40c16f486883" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Jan 29 16:16:10 crc kubenswrapper[4714]: I0129 16:16:10.601802 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" Jan 29 16:16:10 crc kubenswrapper[4714]: I0129 16:16:10.756799 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99c8ea80-a927-49e6-96fb-40c16f486883-client-ca\") pod \"99c8ea80-a927-49e6-96fb-40c16f486883\" (UID: \"99c8ea80-a927-49e6-96fb-40c16f486883\") " Jan 29 16:16:10 crc kubenswrapper[4714]: I0129 16:16:10.756873 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99c8ea80-a927-49e6-96fb-40c16f486883-config\") pod \"99c8ea80-a927-49e6-96fb-40c16f486883\" (UID: \"99c8ea80-a927-49e6-96fb-40c16f486883\") " Jan 29 16:16:10 crc kubenswrapper[4714]: I0129 16:16:10.757018 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk544\" (UniqueName: \"kubernetes.io/projected/99c8ea80-a927-49e6-96fb-40c16f486883-kube-api-access-gk544\") pod \"99c8ea80-a927-49e6-96fb-40c16f486883\" (UID: \"99c8ea80-a927-49e6-96fb-40c16f486883\") " Jan 29 16:16:10 crc kubenswrapper[4714]: I0129 16:16:10.757178 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99c8ea80-a927-49e6-96fb-40c16f486883-serving-cert\") pod \"99c8ea80-a927-49e6-96fb-40c16f486883\" (UID: \"99c8ea80-a927-49e6-96fb-40c16f486883\") " Jan 29 16:16:10 crc kubenswrapper[4714]: I0129 16:16:10.757623 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99c8ea80-a927-49e6-96fb-40c16f486883-config" (OuterVolumeSpecName: "config") pod "99c8ea80-a927-49e6-96fb-40c16f486883" (UID: "99c8ea80-a927-49e6-96fb-40c16f486883"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:16:10 crc kubenswrapper[4714]: I0129 16:16:10.757621 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99c8ea80-a927-49e6-96fb-40c16f486883-client-ca" (OuterVolumeSpecName: "client-ca") pod "99c8ea80-a927-49e6-96fb-40c16f486883" (UID: "99c8ea80-a927-49e6-96fb-40c16f486883"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:16:10 crc kubenswrapper[4714]: I0129 16:16:10.758541 4714 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99c8ea80-a927-49e6-96fb-40c16f486883-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:10 crc kubenswrapper[4714]: I0129 16:16:10.758580 4714 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99c8ea80-a927-49e6-96fb-40c16f486883-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:10 crc kubenswrapper[4714]: I0129 16:16:10.761466 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99c8ea80-a927-49e6-96fb-40c16f486883-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "99c8ea80-a927-49e6-96fb-40c16f486883" (UID: "99c8ea80-a927-49e6-96fb-40c16f486883"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:16:10 crc kubenswrapper[4714]: I0129 16:16:10.761515 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99c8ea80-a927-49e6-96fb-40c16f486883-kube-api-access-gk544" (OuterVolumeSpecName: "kube-api-access-gk544") pod "99c8ea80-a927-49e6-96fb-40c16f486883" (UID: "99c8ea80-a927-49e6-96fb-40c16f486883"). InnerVolumeSpecName "kube-api-access-gk544". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:16:10 crc kubenswrapper[4714]: I0129 16:16:10.859495 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gk544\" (UniqueName: \"kubernetes.io/projected/99c8ea80-a927-49e6-96fb-40c16f486883-kube-api-access-gk544\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:10 crc kubenswrapper[4714]: I0129 16:16:10.859529 4714 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99c8ea80-a927-49e6-96fb-40c16f486883-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:11 crc kubenswrapper[4714]: I0129 16:16:11.283810 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" event={"ID":"5f131437-8551-46b5-b9e4-ca30784e8e76","Type":"ContainerStarted","Data":"51f4e55276037a86c65e6532c8fb92ea992ee6083c96ae7e2613baadc4ab112e"} Jan 29 16:16:11 crc kubenswrapper[4714]: I0129 16:16:11.284281 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:11 crc kubenswrapper[4714]: I0129 16:16:11.284344 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" event={"ID":"5f131437-8551-46b5-b9e4-ca30784e8e76","Type":"ContainerStarted","Data":"208ee0164257d5ce848beee5714cbcaa9c52741aa6c8b5e3e845659b1c10dd3c"} Jan 29 16:16:11 crc kubenswrapper[4714]: I0129 16:16:11.285390 4714 generic.go:334] "Generic (PLEG): container finished" podID="99c8ea80-a927-49e6-96fb-40c16f486883" containerID="d19fbbf2f1739d674126fe1890b29729a713b61f7c1394a35442defb39bc3f82" exitCode=0 Jan 29 16:16:11 crc kubenswrapper[4714]: I0129 16:16:11.285442 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" event={"ID":"99c8ea80-a927-49e6-96fb-40c16f486883","Type":"ContainerDied","Data":"d19fbbf2f1739d674126fe1890b29729a713b61f7c1394a35442defb39bc3f82"} Jan 29 16:16:11 crc kubenswrapper[4714]: I0129 16:16:11.285475 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" Jan 29 16:16:11 crc kubenswrapper[4714]: I0129 16:16:11.285504 4714 scope.go:117] "RemoveContainer" containerID="d19fbbf2f1739d674126fe1890b29729a713b61f7c1394a35442defb39bc3f82" Jan 29 16:16:11 crc kubenswrapper[4714]: I0129 16:16:11.285486 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td" event={"ID":"99c8ea80-a927-49e6-96fb-40c16f486883","Type":"ContainerDied","Data":"af0e31a2974ff0d8ba2387e7344483b384496c48296610f8600e9690dddb022f"} Jan 29 16:16:11 crc kubenswrapper[4714]: I0129 16:16:11.304703 4714 scope.go:117] "RemoveContainer" containerID="d19fbbf2f1739d674126fe1890b29729a713b61f7c1394a35442defb39bc3f82" Jan 29 16:16:11 crc kubenswrapper[4714]: E0129 16:16:11.305210 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d19fbbf2f1739d674126fe1890b29729a713b61f7c1394a35442defb39bc3f82\": container with ID starting with d19fbbf2f1739d674126fe1890b29729a713b61f7c1394a35442defb39bc3f82 not found: ID does not exist" containerID="d19fbbf2f1739d674126fe1890b29729a713b61f7c1394a35442defb39bc3f82" Jan 29 16:16:11 crc kubenswrapper[4714]: I0129 16:16:11.305295 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d19fbbf2f1739d674126fe1890b29729a713b61f7c1394a35442defb39bc3f82"} err="failed to get container status \"d19fbbf2f1739d674126fe1890b29729a713b61f7c1394a35442defb39bc3f82\": rpc error: code = NotFound desc = could not find container \"d19fbbf2f1739d674126fe1890b29729a713b61f7c1394a35442defb39bc3f82\": container with ID starting with d19fbbf2f1739d674126fe1890b29729a713b61f7c1394a35442defb39bc3f82 not found: ID does not exist" Jan 29 16:16:11 crc kubenswrapper[4714]: I0129 16:16:11.312982 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" podStartSLOduration=2.31296472 podStartE2EDuration="2.31296472s" podCreationTimestamp="2026-01-29 16:16:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:16:11.308521404 +0000 UTC m=+377.829022524" watchObservedRunningTime="2026-01-29 16:16:11.31296472 +0000 UTC m=+377.833465840" Jan 29 16:16:11 crc kubenswrapper[4714]: I0129 16:16:11.334602 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td"] Jan 29 16:16:11 crc kubenswrapper[4714]: I0129 16:16:11.341264 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ccd555bf-xt5td"] Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.051373 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c6f6c447c-4rzrl"] Jan 29 16:16:12 crc kubenswrapper[4714]: E0129 16:16:12.051635 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99c8ea80-a927-49e6-96fb-40c16f486883" containerName="route-controller-manager" Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.051651 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="99c8ea80-a927-49e6-96fb-40c16f486883" containerName="route-controller-manager" Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.051769 4714 
Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.052277 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c6f6c447c-4rzrl" Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.054339 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.054870 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.054876 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.054976 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.055290 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.056292 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.065264 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c6f6c447c-4rzrl"] Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.179399 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw9mv\" (UniqueName: \"kubernetes.io/projected/cd27896d-ddee-4374-ad3b-dca8f6f7f5d4-kube-api-access-jw9mv\") pod \"route-controller-manager-6c6f6c447c-4rzrl\" (UID: \"cd27896d-ddee-4374-ad3b-dca8f6f7f5d4\") " pod="openshift-route-controller-manager/route-controller-manager-6c6f6c447c-4rzrl" Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.179484 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd27896d-ddee-4374-ad3b-dca8f6f7f5d4-client-ca\") pod \"route-controller-manager-6c6f6c447c-4rzrl\" (UID: \"cd27896d-ddee-4374-ad3b-dca8f6f7f5d4\") " pod="openshift-route-controller-manager/route-controller-manager-6c6f6c447c-4rzrl" Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.179570 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd27896d-ddee-4374-ad3b-dca8f6f7f5d4-config\") pod \"route-controller-manager-6c6f6c447c-4rzrl\" (UID: \"cd27896d-ddee-4374-ad3b-dca8f6f7f5d4\") " pod="openshift-route-controller-manager/route-controller-manager-6c6f6c447c-4rzrl" Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.179877 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd27896d-ddee-4374-ad3b-dca8f6f7f5d4-serving-cert\") pod \"route-controller-manager-6c6f6c447c-4rzrl\" (UID: \"cd27896d-ddee-4374-ad3b-dca8f6f7f5d4\") " pod="openshift-route-controller-manager/route-controller-manager-6c6f6c447c-4rzrl" Jan 29 16:16:12 crc 
kubenswrapper[4714]: I0129 16:16:12.192667 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99c8ea80-a927-49e6-96fb-40c16f486883" path="/var/lib/kubelet/pods/99c8ea80-a927-49e6-96fb-40c16f486883/volumes" Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.281845 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd27896d-ddee-4374-ad3b-dca8f6f7f5d4-config\") pod \"route-controller-manager-6c6f6c447c-4rzrl\" (UID: \"cd27896d-ddee-4374-ad3b-dca8f6f7f5d4\") " pod="openshift-route-controller-manager/route-controller-manager-6c6f6c447c-4rzrl" Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.282026 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd27896d-ddee-4374-ad3b-dca8f6f7f5d4-serving-cert\") pod \"route-controller-manager-6c6f6c447c-4rzrl\" (UID: \"cd27896d-ddee-4374-ad3b-dca8f6f7f5d4\") " pod="openshift-route-controller-manager/route-controller-manager-6c6f6c447c-4rzrl" Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.282078 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw9mv\" (UniqueName: \"kubernetes.io/projected/cd27896d-ddee-4374-ad3b-dca8f6f7f5d4-kube-api-access-jw9mv\") pod \"route-controller-manager-6c6f6c447c-4rzrl\" (UID: \"cd27896d-ddee-4374-ad3b-dca8f6f7f5d4\") " pod="openshift-route-controller-manager/route-controller-manager-6c6f6c447c-4rzrl" Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.282139 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd27896d-ddee-4374-ad3b-dca8f6f7f5d4-client-ca\") pod \"route-controller-manager-6c6f6c447c-4rzrl\" (UID: \"cd27896d-ddee-4374-ad3b-dca8f6f7f5d4\") " pod="openshift-route-controller-manager/route-controller-manager-6c6f6c447c-4rzrl" Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.284171 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd27896d-ddee-4374-ad3b-dca8f6f7f5d4-client-ca\") pod \"route-controller-manager-6c6f6c447c-4rzrl\" (UID: \"cd27896d-ddee-4374-ad3b-dca8f6f7f5d4\") " pod="openshift-route-controller-manager/route-controller-manager-6c6f6c447c-4rzrl" Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.285297 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd27896d-ddee-4374-ad3b-dca8f6f7f5d4-config\") pod \"route-controller-manager-6c6f6c447c-4rzrl\" (UID: \"cd27896d-ddee-4374-ad3b-dca8f6f7f5d4\") " pod="openshift-route-controller-manager/route-controller-manager-6c6f6c447c-4rzrl" Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.290768 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd27896d-ddee-4374-ad3b-dca8f6f7f5d4-serving-cert\") pod \"route-controller-manager-6c6f6c447c-4rzrl\" (UID: \"cd27896d-ddee-4374-ad3b-dca8f6f7f5d4\") " pod="openshift-route-controller-manager/route-controller-manager-6c6f6c447c-4rzrl" Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.319631 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw9mv\" (UniqueName: \"kubernetes.io/projected/cd27896d-ddee-4374-ad3b-dca8f6f7f5d4-kube-api-access-jw9mv\") pod \"route-controller-manager-6c6f6c447c-4rzrl\" (UID: \"cd27896d-ddee-4374-ad3b-dca8f6f7f5d4\") " pod="openshift-route-controller-manager/route-controller-manager-6c6f6c447c-4rzrl"
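The reconciler_common.go / operation_generator.go lines above walk each volume of the replacement pod through the kubelet volume manager's fixed progression: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded, after which the pod can start; the UnmountVolume.TearDown and "Volume detached" lines for the old pod are the same reconciler driving the teardown direction. A toy sketch of that progression under a simplified three-state model; the state names and reconcile function below are illustrative, not the kubelet's actual volume manager types:

package main

import "fmt"

// volumeState is a simplified stand-in for the volume manager's
// actual-state-of-world bookkeeping.
type volumeState int

const (
	attached volumeState = iota // VerifyControllerAttachedVolume succeeded
	mounting                    // operationExecutor.MountVolume started
	mounted                     // MountVolume.SetUp succeeded
)

// reconcile advances one volume a single step toward "mounted",
// logging each transition roughly the way reconciler_common.go does.
func reconcile(name string, state volumeState) volumeState {
	switch state {
	case attached:
		fmt.Printf("operationExecutor.MountVolume started for volume %q\n", name)
		return mounting
	case mounting:
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", name)
		return mounted
	default:
		return mounted // nothing left to do
	}
}

func main() {
	// The four volumes the log shows for route-controller-manager-6c6f6c447c-4rzrl.
	for _, v := range []string{"config", "serving-cert", "client-ca", "kube-api-access-jw9mv"} {
		s := attached
		for s != mounted {
			s = reconcile(v, s)
		}
	}
}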
Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.379843 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c6f6c447c-4rzrl" Jan 29 16:16:12 crc kubenswrapper[4714]: I0129 16:16:12.679898 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c6f6c447c-4rzrl"] Jan 29 16:16:12 crc kubenswrapper[4714]: W0129 16:16:12.690230 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd27896d_ddee_4374_ad3b_dca8f6f7f5d4.slice/crio-4be1e858e531185043040e9e09cb296c9b0d2208caa7180ed7beaebb666fa3ef WatchSource:0}: Error finding container 4be1e858e531185043040e9e09cb296c9b0d2208caa7180ed7beaebb666fa3ef: Status 404 returned error can't find the container with id 4be1e858e531185043040e9e09cb296c9b0d2208caa7180ed7beaebb666fa3ef Jan 29 16:16:13 crc kubenswrapper[4714]: I0129 16:16:13.302798 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c6f6c447c-4rzrl" event={"ID":"cd27896d-ddee-4374-ad3b-dca8f6f7f5d4","Type":"ContainerStarted","Data":"6e0400d6dd5564863c84909aa18ec4f8123768e7a6efb155e6816fa23ca4a8de"} Jan 29 16:16:13 crc kubenswrapper[4714]: I0129 16:16:13.303250 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c6f6c447c-4rzrl" event={"ID":"cd27896d-ddee-4374-ad3b-dca8f6f7f5d4","Type":"ContainerStarted","Data":"4be1e858e531185043040e9e09cb296c9b0d2208caa7180ed7beaebb666fa3ef"} Jan 29 16:16:13 crc kubenswrapper[4714]: I0129 16:16:13.335876 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c6f6c447c-4rzrl" podStartSLOduration=3.335838791 podStartE2EDuration="3.335838791s" podCreationTimestamp="2026-01-29 16:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:16:13.329511332 +0000 UTC m=+379.850012452" watchObservedRunningTime="2026-01-29 16:16:13.335838791 +0000 UTC m=+379.856339951" Jan 29 16:16:14 crc kubenswrapper[4714]: I0129 16:16:14.309006 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c6f6c447c-4rzrl" Jan 29 16:16:14 crc kubenswrapper[4714]: I0129 16:16:14.315027 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6c6f6c447c-4rzrl" Jan 29 16:16:23 crc kubenswrapper[4714]: I0129 16:16:23.806159 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-74twj"] Jan 29 16:16:23 crc kubenswrapper[4714]: I0129 16:16:23.807139 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-74twj" podUID="a97ed1ff-657f-4bde-943b-78caf9d07f92" containerName="registry-server" containerID="cri-o://0725e5fe8581c9f5ab88fa8ad4af11d5f996f4972602031ec88175b5fda32f79" gracePeriod=30 Jan 29 16:16:23 crc kubenswrapper[4714]: I0129 16:16:23.817809 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xtr82"]
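Each "SyncLoop DELETE" above and below is answered by a kuberuntime_container.go "Killing container with a grace period" line with gracePeriod=30: the kubelet asks the CRI runtime (cri-o here) to stop the container and allows it 30 seconds to exit before a forced kill. A rough sketch of that call shape against a hypothetical runtime interface; the real client interface lives in k8s.io/cri-api, and every name below is an illustrative stand-in:

package main

import (
	"context"
	"fmt"
	"time"
)

// runtimeService is a hypothetical stand-in for the CRI runtime
// client the kubelet talks to.
type runtimeService interface {
	// StopContainer signals the container and waits up to
	// timeoutSeconds before the runtime force-kills it.
	StopContainer(ctx context.Context, containerID string, timeoutSeconds int64) error
}

// killContainer mirrors the log line above: one stop request per
// container, carrying the pod's termination grace period.
func killContainer(ctx context.Context, rs runtimeService, pod, containerID string, grace int64) error {
	fmt.Printf("Killing container with a grace period: pod=%s containerID=cri-o://%s gracePeriod=%d\n",
		pod, containerID, grace)
	ctx, cancel := context.WithTimeout(ctx, time.Duration(grace)*time.Second)
	defer cancel()
	return rs.StopContainer(ctx, containerID, grace)
}

// fakeRuntime pretends every container exits promptly.
type fakeRuntime struct{}

func (fakeRuntime) StopContainer(ctx context.Context, id string, timeout int64) error {
	return nil
}

func main() {
	_ = killContainer(context.Background(), fakeRuntime{},
		"openshift-marketplace/certified-operators-74twj",
		"0725e5fe8581c9f5ab88fa8ad4af11d5f996f4972602031ec88175b5fda32f79", 30)
}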
Jan 29 16:16:23 crc kubenswrapper[4714]: I0129 16:16:23.818142 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xtr82" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" containerName="registry-server" containerID="cri-o://a2c143c1c72b06e6085dcb21f057a3e45d817ac3a8bf8d7e3516db54610f130e" gracePeriod=30 Jan 29 16:16:23 crc kubenswrapper[4714]: I0129 16:16:23.826389 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l2t56"] Jan 29 16:16:23 crc kubenswrapper[4714]: I0129 16:16:23.826851 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-l2t56" podUID="80515d06-c09e-4c9d-a90f-43cc84edf4c9" containerName="marketplace-operator" containerID="cri-o://2cfb1164c8a5f24d11bd2a23214b6a7408be50990447c790085e11ea6faaec50" gracePeriod=30 Jan 29 16:16:23 crc kubenswrapper[4714]: I0129 16:16:23.840345 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7rvrl"] Jan 29 16:16:23 crc kubenswrapper[4714]: I0129 16:16:23.841304 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7rvrl" Jan 29 16:16:23 crc kubenswrapper[4714]: I0129 16:16:23.844790 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nssrv"] Jan 29 16:16:23 crc kubenswrapper[4714]: I0129 16:16:23.845107 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nssrv" podUID="eae853ba-61c9-439b-9dc9-21567075f18a" containerName="registry-server" containerID="cri-o://52559e4420e032272c3033326a96872ff005d794cf95b2dce2bffa130b6cf392" gracePeriod=30 Jan 29 16:16:23 crc kubenswrapper[4714]: I0129 16:16:23.865858 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lb68h"] Jan 29 16:16:23 crc kubenswrapper[4714]: I0129 16:16:23.866485 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lb68h" podUID="d05e7c79-7d66-4453-aedb-f240784ff294" containerName="registry-server" containerID="cri-o://25adf524f9c5473bfa242fd63827380ee76b9012a8eb32666464c503ff1ae5f5" gracePeriod=30 Jan 29 16:16:23 crc kubenswrapper[4714]: I0129 16:16:23.874213 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7rvrl"] Jan 29 16:16:23 crc kubenswrapper[4714]: I0129 16:16:23.970976 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2696757f-83ca-42df-9855-f76adeee02bb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7rvrl\" (UID: \"2696757f-83ca-42df-9855-f76adeee02bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-7rvrl" Jan 29 16:16:23 crc kubenswrapper[4714]: I0129 16:16:23.971174 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4dlv\" (UniqueName: \"kubernetes.io/projected/2696757f-83ca-42df-9855-f76adeee02bb-kube-api-access-m4dlv\") pod \"marketplace-operator-79b997595-7rvrl\" (UID: \"2696757f-83ca-42df-9855-f76adeee02bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-7rvrl" Jan 29 16:16:23 crc kubenswrapper[4714]: I0129 
16:16:23.971267 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2696757f-83ca-42df-9855-f76adeee02bb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7rvrl\" (UID: \"2696757f-83ca-42df-9855-f76adeee02bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-7rvrl" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.073884 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2696757f-83ca-42df-9855-f76adeee02bb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7rvrl\" (UID: \"2696757f-83ca-42df-9855-f76adeee02bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-7rvrl" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.073993 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2696757f-83ca-42df-9855-f76adeee02bb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7rvrl\" (UID: \"2696757f-83ca-42df-9855-f76adeee02bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-7rvrl" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.074058 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4dlv\" (UniqueName: \"kubernetes.io/projected/2696757f-83ca-42df-9855-f76adeee02bb-kube-api-access-m4dlv\") pod \"marketplace-operator-79b997595-7rvrl\" (UID: \"2696757f-83ca-42df-9855-f76adeee02bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-7rvrl" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.075603 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2696757f-83ca-42df-9855-f76adeee02bb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7rvrl\" (UID: \"2696757f-83ca-42df-9855-f76adeee02bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-7rvrl" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.089361 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2696757f-83ca-42df-9855-f76adeee02bb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7rvrl\" (UID: \"2696757f-83ca-42df-9855-f76adeee02bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-7rvrl" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.107300 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4dlv\" (UniqueName: \"kubernetes.io/projected/2696757f-83ca-42df-9855-f76adeee02bb-kube-api-access-m4dlv\") pod \"marketplace-operator-79b997595-7rvrl\" (UID: \"2696757f-83ca-42df-9855-f76adeee02bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-7rvrl" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.242298 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7rvrl" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.258475 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xtr82" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.262816 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-74twj" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.268633 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-l2t56" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.352774 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nssrv" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.361438 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lb68h" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.383616 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11a30de8-b234-47b4-8fd0-44f0c428be78-utilities\") pod \"11a30de8-b234-47b4-8fd0-44f0c428be78\" (UID: \"11a30de8-b234-47b4-8fd0-44f0c428be78\") " Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.383699 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a97ed1ff-657f-4bde-943b-78caf9d07f92-catalog-content\") pod \"a97ed1ff-657f-4bde-943b-78caf9d07f92\" (UID: \"a97ed1ff-657f-4bde-943b-78caf9d07f92\") " Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.383793 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11a30de8-b234-47b4-8fd0-44f0c428be78-catalog-content\") pod \"11a30de8-b234-47b4-8fd0-44f0c428be78\" (UID: \"11a30de8-b234-47b4-8fd0-44f0c428be78\") " Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.383822 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a97ed1ff-657f-4bde-943b-78caf9d07f92-utilities\") pod \"a97ed1ff-657f-4bde-943b-78caf9d07f92\" (UID: \"a97ed1ff-657f-4bde-943b-78caf9d07f92\") " Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.383874 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbn22\" (UniqueName: \"kubernetes.io/projected/11a30de8-b234-47b4-8fd0-44f0c428be78-kube-api-access-zbn22\") pod \"11a30de8-b234-47b4-8fd0-44f0c428be78\" (UID: \"11a30de8-b234-47b4-8fd0-44f0c428be78\") " Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.383904 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/80515d06-c09e-4c9d-a90f-43cc84edf4c9-marketplace-operator-metrics\") pod \"80515d06-c09e-4c9d-a90f-43cc84edf4c9\" (UID: \"80515d06-c09e-4c9d-a90f-43cc84edf4c9\") " Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.383960 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8bfv\" (UniqueName: \"kubernetes.io/projected/a97ed1ff-657f-4bde-943b-78caf9d07f92-kube-api-access-v8bfv\") pod \"a97ed1ff-657f-4bde-943b-78caf9d07f92\" (UID: \"a97ed1ff-657f-4bde-943b-78caf9d07f92\") " Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.383990 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfc66\" (UniqueName: \"kubernetes.io/projected/80515d06-c09e-4c9d-a90f-43cc84edf4c9-kube-api-access-xfc66\") pod \"80515d06-c09e-4c9d-a90f-43cc84edf4c9\" (UID: 
\"80515d06-c09e-4c9d-a90f-43cc84edf4c9\") " Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.384067 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80515d06-c09e-4c9d-a90f-43cc84edf4c9-marketplace-trusted-ca\") pod \"80515d06-c09e-4c9d-a90f-43cc84edf4c9\" (UID: \"80515d06-c09e-4c9d-a90f-43cc84edf4c9\") " Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.384871 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11a30de8-b234-47b4-8fd0-44f0c428be78-utilities" (OuterVolumeSpecName: "utilities") pod "11a30de8-b234-47b4-8fd0-44f0c428be78" (UID: "11a30de8-b234-47b4-8fd0-44f0c428be78"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.385770 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80515d06-c09e-4c9d-a90f-43cc84edf4c9-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "80515d06-c09e-4c9d-a90f-43cc84edf4c9" (UID: "80515d06-c09e-4c9d-a90f-43cc84edf4c9"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.385965 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a97ed1ff-657f-4bde-943b-78caf9d07f92-utilities" (OuterVolumeSpecName: "utilities") pod "a97ed1ff-657f-4bde-943b-78caf9d07f92" (UID: "a97ed1ff-657f-4bde-943b-78caf9d07f92"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.389618 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11a30de8-b234-47b4-8fd0-44f0c428be78-kube-api-access-zbn22" (OuterVolumeSpecName: "kube-api-access-zbn22") pod "11a30de8-b234-47b4-8fd0-44f0c428be78" (UID: "11a30de8-b234-47b4-8fd0-44f0c428be78"). InnerVolumeSpecName "kube-api-access-zbn22". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.390865 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a97ed1ff-657f-4bde-943b-78caf9d07f92-kube-api-access-v8bfv" (OuterVolumeSpecName: "kube-api-access-v8bfv") pod "a97ed1ff-657f-4bde-943b-78caf9d07f92" (UID: "a97ed1ff-657f-4bde-943b-78caf9d07f92"). InnerVolumeSpecName "kube-api-access-v8bfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.407329 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80515d06-c09e-4c9d-a90f-43cc84edf4c9-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "80515d06-c09e-4c9d-a90f-43cc84edf4c9" (UID: "80515d06-c09e-4c9d-a90f-43cc84edf4c9"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.410003 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80515d06-c09e-4c9d-a90f-43cc84edf4c9-kube-api-access-xfc66" (OuterVolumeSpecName: "kube-api-access-xfc66") pod "80515d06-c09e-4c9d-a90f-43cc84edf4c9" (UID: "80515d06-c09e-4c9d-a90f-43cc84edf4c9"). InnerVolumeSpecName "kube-api-access-xfc66". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.411702 4714 generic.go:334] "Generic (PLEG): container finished" podID="11a30de8-b234-47b4-8fd0-44f0c428be78" containerID="a2c143c1c72b06e6085dcb21f057a3e45d817ac3a8bf8d7e3516db54610f130e" exitCode=0 Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.411792 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtr82" event={"ID":"11a30de8-b234-47b4-8fd0-44f0c428be78","Type":"ContainerDied","Data":"a2c143c1c72b06e6085dcb21f057a3e45d817ac3a8bf8d7e3516db54610f130e"} Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.411823 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtr82" event={"ID":"11a30de8-b234-47b4-8fd0-44f0c428be78","Type":"ContainerDied","Data":"d1e11cf94d1ae7d280d25746da20bca8871b9f9c8323efe87d1cfb324504d7a1"} Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.411842 4714 scope.go:117] "RemoveContainer" containerID="a2c143c1c72b06e6085dcb21f057a3e45d817ac3a8bf8d7e3516db54610f130e" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.412121 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xtr82" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.419319 4714 generic.go:334] "Generic (PLEG): container finished" podID="a97ed1ff-657f-4bde-943b-78caf9d07f92" containerID="0725e5fe8581c9f5ab88fa8ad4af11d5f996f4972602031ec88175b5fda32f79" exitCode=0 Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.419457 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-74twj" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.419599 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74twj" event={"ID":"a97ed1ff-657f-4bde-943b-78caf9d07f92","Type":"ContainerDied","Data":"0725e5fe8581c9f5ab88fa8ad4af11d5f996f4972602031ec88175b5fda32f79"} Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.419672 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74twj" event={"ID":"a97ed1ff-657f-4bde-943b-78caf9d07f92","Type":"ContainerDied","Data":"27e514e7925336355503e562c2b866089bbb8f20f6235853c55635bfeebcfe8c"} Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.426202 4714 generic.go:334] "Generic (PLEG): container finished" podID="d05e7c79-7d66-4453-aedb-f240784ff294" containerID="25adf524f9c5473bfa242fd63827380ee76b9012a8eb32666464c503ff1ae5f5" exitCode=0 Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.426293 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb68h" event={"ID":"d05e7c79-7d66-4453-aedb-f240784ff294","Type":"ContainerDied","Data":"25adf524f9c5473bfa242fd63827380ee76b9012a8eb32666464c503ff1ae5f5"} Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.426347 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb68h" event={"ID":"d05e7c79-7d66-4453-aedb-f240784ff294","Type":"ContainerDied","Data":"5e22f2e727671a2879c86dcb9146aebbe76ddedf77fd5e705c834b21cf8bd941"} Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.426449 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lb68h" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.429181 4714 generic.go:334] "Generic (PLEG): container finished" podID="80515d06-c09e-4c9d-a90f-43cc84edf4c9" containerID="2cfb1164c8a5f24d11bd2a23214b6a7408be50990447c790085e11ea6faaec50" exitCode=0 Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.429232 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-l2t56" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.429256 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-l2t56" event={"ID":"80515d06-c09e-4c9d-a90f-43cc84edf4c9","Type":"ContainerDied","Data":"2cfb1164c8a5f24d11bd2a23214b6a7408be50990447c790085e11ea6faaec50"} Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.429295 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-l2t56" event={"ID":"80515d06-c09e-4c9d-a90f-43cc84edf4c9","Type":"ContainerDied","Data":"5880f1855bae3fd6f603655d40b770623f038db9a3cb9db3918877f801567acc"} Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.431495 4714 generic.go:334] "Generic (PLEG): container finished" podID="eae853ba-61c9-439b-9dc9-21567075f18a" containerID="52559e4420e032272c3033326a96872ff005d794cf95b2dce2bffa130b6cf392" exitCode=0 Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.431530 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nssrv" event={"ID":"eae853ba-61c9-439b-9dc9-21567075f18a","Type":"ContainerDied","Data":"52559e4420e032272c3033326a96872ff005d794cf95b2dce2bffa130b6cf392"} Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.431555 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nssrv" event={"ID":"eae853ba-61c9-439b-9dc9-21567075f18a","Type":"ContainerDied","Data":"11eca2d99e975c8d4c6d498c418a6ed86174580092ad733d4cf31d057f9d974e"} Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.431616 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nssrv" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.452783 4714 scope.go:117] "RemoveContainer" containerID="8a54a45940dea4f60292ad1cc53a3fa404d30031632cdd928c54b5c498a0d947" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.461050 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l2t56"] Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.464726 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l2t56"] Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.466222 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a97ed1ff-657f-4bde-943b-78caf9d07f92-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a97ed1ff-657f-4bde-943b-78caf9d07f92" (UID: "a97ed1ff-657f-4bde-943b-78caf9d07f92"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.469818 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11a30de8-b234-47b4-8fd0-44f0c428be78-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11a30de8-b234-47b4-8fd0-44f0c428be78" (UID: "11a30de8-b234-47b4-8fd0-44f0c428be78"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.472824 4714 scope.go:117] "RemoveContainer" containerID="70b956b273a4676b7f7a5e461f43c09f1067f834a7d754d57cae60e15152821d" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.485521 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntklh\" (UniqueName: \"kubernetes.io/projected/eae853ba-61c9-439b-9dc9-21567075f18a-kube-api-access-ntklh\") pod \"eae853ba-61c9-439b-9dc9-21567075f18a\" (UID: \"eae853ba-61c9-439b-9dc9-21567075f18a\") " Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.485587 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d05e7c79-7d66-4453-aedb-f240784ff294-utilities\") pod \"d05e7c79-7d66-4453-aedb-f240784ff294\" (UID: \"d05e7c79-7d66-4453-aedb-f240784ff294\") " Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.485615 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d05e7c79-7d66-4453-aedb-f240784ff294-catalog-content\") pod \"d05e7c79-7d66-4453-aedb-f240784ff294\" (UID: \"d05e7c79-7d66-4453-aedb-f240784ff294\") " Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.485658 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae853ba-61c9-439b-9dc9-21567075f18a-utilities\") pod \"eae853ba-61c9-439b-9dc9-21567075f18a\" (UID: \"eae853ba-61c9-439b-9dc9-21567075f18a\") " Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.485695 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae853ba-61c9-439b-9dc9-21567075f18a-catalog-content\") pod \"eae853ba-61c9-439b-9dc9-21567075f18a\" (UID: \"eae853ba-61c9-439b-9dc9-21567075f18a\") " Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.485737 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4jjj\" (UniqueName: \"kubernetes.io/projected/d05e7c79-7d66-4453-aedb-f240784ff294-kube-api-access-m4jjj\") pod \"d05e7c79-7d66-4453-aedb-f240784ff294\" (UID: \"d05e7c79-7d66-4453-aedb-f240784ff294\") " Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.485980 4714 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11a30de8-b234-47b4-8fd0-44f0c428be78-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.485997 4714 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a97ed1ff-657f-4bde-943b-78caf9d07f92-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.486010 4714 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/11a30de8-b234-47b4-8fd0-44f0c428be78-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.486022 4714 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a97ed1ff-657f-4bde-943b-78caf9d07f92-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.486032 4714 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/80515d06-c09e-4c9d-a90f-43cc84edf4c9-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.486041 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbn22\" (UniqueName: \"kubernetes.io/projected/11a30de8-b234-47b4-8fd0-44f0c428be78-kube-api-access-zbn22\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.486049 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8bfv\" (UniqueName: \"kubernetes.io/projected/a97ed1ff-657f-4bde-943b-78caf9d07f92-kube-api-access-v8bfv\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.486058 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfc66\" (UniqueName: \"kubernetes.io/projected/80515d06-c09e-4c9d-a90f-43cc84edf4c9-kube-api-access-xfc66\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.486069 4714 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80515d06-c09e-4c9d-a90f-43cc84edf4c9-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.488395 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d05e7c79-7d66-4453-aedb-f240784ff294-utilities" (OuterVolumeSpecName: "utilities") pod "d05e7c79-7d66-4453-aedb-f240784ff294" (UID: "d05e7c79-7d66-4453-aedb-f240784ff294"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.488544 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eae853ba-61c9-439b-9dc9-21567075f18a-utilities" (OuterVolumeSpecName: "utilities") pod "eae853ba-61c9-439b-9dc9-21567075f18a" (UID: "eae853ba-61c9-439b-9dc9-21567075f18a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.491335 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eae853ba-61c9-439b-9dc9-21567075f18a-kube-api-access-ntklh" (OuterVolumeSpecName: "kube-api-access-ntklh") pod "eae853ba-61c9-439b-9dc9-21567075f18a" (UID: "eae853ba-61c9-439b-9dc9-21567075f18a"). InnerVolumeSpecName "kube-api-access-ntklh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.492030 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d05e7c79-7d66-4453-aedb-f240784ff294-kube-api-access-m4jjj" (OuterVolumeSpecName: "kube-api-access-m4jjj") pod "d05e7c79-7d66-4453-aedb-f240784ff294" (UID: "d05e7c79-7d66-4453-aedb-f240784ff294"). InnerVolumeSpecName "kube-api-access-m4jjj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.508404 4714 scope.go:117] "RemoveContainer" containerID="a2c143c1c72b06e6085dcb21f057a3e45d817ac3a8bf8d7e3516db54610f130e" Jan 29 16:16:24 crc kubenswrapper[4714]: E0129 16:16:24.509022 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2c143c1c72b06e6085dcb21f057a3e45d817ac3a8bf8d7e3516db54610f130e\": container with ID starting with a2c143c1c72b06e6085dcb21f057a3e45d817ac3a8bf8d7e3516db54610f130e not found: ID does not exist" containerID="a2c143c1c72b06e6085dcb21f057a3e45d817ac3a8bf8d7e3516db54610f130e" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.509060 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2c143c1c72b06e6085dcb21f057a3e45d817ac3a8bf8d7e3516db54610f130e"} err="failed to get container status \"a2c143c1c72b06e6085dcb21f057a3e45d817ac3a8bf8d7e3516db54610f130e\": rpc error: code = NotFound desc = could not find container \"a2c143c1c72b06e6085dcb21f057a3e45d817ac3a8bf8d7e3516db54610f130e\": container with ID starting with a2c143c1c72b06e6085dcb21f057a3e45d817ac3a8bf8d7e3516db54610f130e not found: ID does not exist" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.509092 4714 scope.go:117] "RemoveContainer" containerID="8a54a45940dea4f60292ad1cc53a3fa404d30031632cdd928c54b5c498a0d947" Jan 29 16:16:24 crc kubenswrapper[4714]: E0129 16:16:24.509458 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a54a45940dea4f60292ad1cc53a3fa404d30031632cdd928c54b5c498a0d947\": container with ID starting with 8a54a45940dea4f60292ad1cc53a3fa404d30031632cdd928c54b5c498a0d947 not found: ID does not exist" containerID="8a54a45940dea4f60292ad1cc53a3fa404d30031632cdd928c54b5c498a0d947" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.509473 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a54a45940dea4f60292ad1cc53a3fa404d30031632cdd928c54b5c498a0d947"} err="failed to get container status \"8a54a45940dea4f60292ad1cc53a3fa404d30031632cdd928c54b5c498a0d947\": rpc error: code = NotFound desc = could not find container \"8a54a45940dea4f60292ad1cc53a3fa404d30031632cdd928c54b5c498a0d947\": container with ID starting with 8a54a45940dea4f60292ad1cc53a3fa404d30031632cdd928c54b5c498a0d947 not found: ID does not exist" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.509486 4714 scope.go:117] "RemoveContainer" containerID="70b956b273a4676b7f7a5e461f43c09f1067f834a7d754d57cae60e15152821d" Jan 29 16:16:24 crc kubenswrapper[4714]: E0129 16:16:24.509805 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70b956b273a4676b7f7a5e461f43c09f1067f834a7d754d57cae60e15152821d\": container with ID starting with 70b956b273a4676b7f7a5e461f43c09f1067f834a7d754d57cae60e15152821d not found: ID does not exist" containerID="70b956b273a4676b7f7a5e461f43c09f1067f834a7d754d57cae60e15152821d" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.509837 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70b956b273a4676b7f7a5e461f43c09f1067f834a7d754d57cae60e15152821d"} err="failed to get container status \"70b956b273a4676b7f7a5e461f43c09f1067f834a7d754d57cae60e15152821d\": rpc error: code = NotFound desc = could not find container \"70b956b273a4676b7f7a5e461f43c09f1067f834a7d754d57cae60e15152821d\": container with ID starting with 70b956b273a4676b7f7a5e461f43c09f1067f834a7d754d57cae60e15152821d not found: ID does not exist"
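The RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triplets above and below appear to be the container deletor racing its own cleanup: re-querying a container it has already removed yields gRPC NotFound, which the kubelet logs and then tolerates, since the container being gone is exactly the desired end state. A minimal sketch of that idempotent pattern; removeContainer and deleteContainer are illustrative stand-ins, not kubelet functions:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer stands in for the CRI call; here it always reports
// the container as already deleted, like the runtime does above.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q: ID does not exist", id)
}

// deleteContainer treats NotFound as success: a container that is
// already gone needs no further deletion.
func deleteContainer(id string) error {
	if err := removeContainer(id); err != nil {
		if status.Code(err) == codes.NotFound {
			fmt.Printf("DeleteContainer: %s already gone, ignoring: %v\n", id[:12], err)
			return nil
		}
		return err
	}
	return nil
}

func main() {
	if err := deleteContainer("2cfb1164c8a5f24d11bd2a23214b6a7408be50990447c790085e11ea6faaec50"); err != nil {
		fmt.Println("unexpected:", err)
	}
}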
Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.509853 4714 scope.go:117] "RemoveContainer" containerID="0725e5fe8581c9f5ab88fa8ad4af11d5f996f4972602031ec88175b5fda32f79" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.511783 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eae853ba-61c9-439b-9dc9-21567075f18a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eae853ba-61c9-439b-9dc9-21567075f18a" (UID: "eae853ba-61c9-439b-9dc9-21567075f18a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.525099 4714 scope.go:117] "RemoveContainer" containerID="b55a4a53101476467f28cefb11eb7b554ba1847628876fc42f62a75c8730a4f6" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.542971 4714 scope.go:117] "RemoveContainer" containerID="bebaa193fd909649d996ac5ecb12f75e7aa251dd1a4b1b911882734c87e4b046" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.557637 4714 scope.go:117] "RemoveContainer" containerID="0725e5fe8581c9f5ab88fa8ad4af11d5f996f4972602031ec88175b5fda32f79" Jan 29 16:16:24 crc kubenswrapper[4714]: E0129 16:16:24.558026 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0725e5fe8581c9f5ab88fa8ad4af11d5f996f4972602031ec88175b5fda32f79\": container with ID starting with 0725e5fe8581c9f5ab88fa8ad4af11d5f996f4972602031ec88175b5fda32f79 not found: ID does not exist" containerID="0725e5fe8581c9f5ab88fa8ad4af11d5f996f4972602031ec88175b5fda32f79" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.558068 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0725e5fe8581c9f5ab88fa8ad4af11d5f996f4972602031ec88175b5fda32f79"} err="failed to get container status \"0725e5fe8581c9f5ab88fa8ad4af11d5f996f4972602031ec88175b5fda32f79\": rpc error: code = NotFound desc = could not find container \"0725e5fe8581c9f5ab88fa8ad4af11d5f996f4972602031ec88175b5fda32f79\": container with ID starting with 0725e5fe8581c9f5ab88fa8ad4af11d5f996f4972602031ec88175b5fda32f79 not found: ID does not exist" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.558100 4714 scope.go:117] "RemoveContainer" containerID="b55a4a53101476467f28cefb11eb7b554ba1847628876fc42f62a75c8730a4f6" Jan 29 16:16:24 crc kubenswrapper[4714]: E0129 16:16:24.558380 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b55a4a53101476467f28cefb11eb7b554ba1847628876fc42f62a75c8730a4f6\": container with ID starting with b55a4a53101476467f28cefb11eb7b554ba1847628876fc42f62a75c8730a4f6 not found: ID does not exist" containerID="b55a4a53101476467f28cefb11eb7b554ba1847628876fc42f62a75c8730a4f6" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.558416 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b55a4a53101476467f28cefb11eb7b554ba1847628876fc42f62a75c8730a4f6"} err="failed to get container status \"b55a4a53101476467f28cefb11eb7b554ba1847628876fc42f62a75c8730a4f6\": rpc error: code = NotFound desc = could not find container \"b55a4a53101476467f28cefb11eb7b554ba1847628876fc42f62a75c8730a4f6\": container 
with ID starting with b55a4a53101476467f28cefb11eb7b554ba1847628876fc42f62a75c8730a4f6 not found: ID does not exist" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.558441 4714 scope.go:117] "RemoveContainer" containerID="bebaa193fd909649d996ac5ecb12f75e7aa251dd1a4b1b911882734c87e4b046" Jan 29 16:16:24 crc kubenswrapper[4714]: E0129 16:16:24.558721 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bebaa193fd909649d996ac5ecb12f75e7aa251dd1a4b1b911882734c87e4b046\": container with ID starting with bebaa193fd909649d996ac5ecb12f75e7aa251dd1a4b1b911882734c87e4b046 not found: ID does not exist" containerID="bebaa193fd909649d996ac5ecb12f75e7aa251dd1a4b1b911882734c87e4b046" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.558745 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bebaa193fd909649d996ac5ecb12f75e7aa251dd1a4b1b911882734c87e4b046"} err="failed to get container status \"bebaa193fd909649d996ac5ecb12f75e7aa251dd1a4b1b911882734c87e4b046\": rpc error: code = NotFound desc = could not find container \"bebaa193fd909649d996ac5ecb12f75e7aa251dd1a4b1b911882734c87e4b046\": container with ID starting with bebaa193fd909649d996ac5ecb12f75e7aa251dd1a4b1b911882734c87e4b046 not found: ID does not exist" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.558768 4714 scope.go:117] "RemoveContainer" containerID="25adf524f9c5473bfa242fd63827380ee76b9012a8eb32666464c503ff1ae5f5" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.574809 4714 scope.go:117] "RemoveContainer" containerID="dc4ea141b6a80ae098de7ca5a17e9e2b1ec3ebf2112f640c6fad4d5fcc75a51c" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.587381 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntklh\" (UniqueName: \"kubernetes.io/projected/eae853ba-61c9-439b-9dc9-21567075f18a-kube-api-access-ntklh\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.587408 4714 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d05e7c79-7d66-4453-aedb-f240784ff294-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.587419 4714 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae853ba-61c9-439b-9dc9-21567075f18a-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.587428 4714 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae853ba-61c9-439b-9dc9-21567075f18a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.587436 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4jjj\" (UniqueName: \"kubernetes.io/projected/d05e7c79-7d66-4453-aedb-f240784ff294-kube-api-access-m4jjj\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.588770 4714 scope.go:117] "RemoveContainer" containerID="0e066cf92f43693eba9898b6ef36d8d3eb21b4fa7c877b99db7e3e39dddda134" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.602639 4714 scope.go:117] "RemoveContainer" containerID="25adf524f9c5473bfa242fd63827380ee76b9012a8eb32666464c503ff1ae5f5" Jan 29 16:16:24 crc kubenswrapper[4714]: E0129 16:16:24.604103 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"25adf524f9c5473bfa242fd63827380ee76b9012a8eb32666464c503ff1ae5f5\": container with ID starting with 25adf524f9c5473bfa242fd63827380ee76b9012a8eb32666464c503ff1ae5f5 not found: ID does not exist" containerID="25adf524f9c5473bfa242fd63827380ee76b9012a8eb32666464c503ff1ae5f5" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.604141 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25adf524f9c5473bfa242fd63827380ee76b9012a8eb32666464c503ff1ae5f5"} err="failed to get container status \"25adf524f9c5473bfa242fd63827380ee76b9012a8eb32666464c503ff1ae5f5\": rpc error: code = NotFound desc = could not find container \"25adf524f9c5473bfa242fd63827380ee76b9012a8eb32666464c503ff1ae5f5\": container with ID starting with 25adf524f9c5473bfa242fd63827380ee76b9012a8eb32666464c503ff1ae5f5 not found: ID does not exist" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.604174 4714 scope.go:117] "RemoveContainer" containerID="dc4ea141b6a80ae098de7ca5a17e9e2b1ec3ebf2112f640c6fad4d5fcc75a51c" Jan 29 16:16:24 crc kubenswrapper[4714]: E0129 16:16:24.604523 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc4ea141b6a80ae098de7ca5a17e9e2b1ec3ebf2112f640c6fad4d5fcc75a51c\": container with ID starting with dc4ea141b6a80ae098de7ca5a17e9e2b1ec3ebf2112f640c6fad4d5fcc75a51c not found: ID does not exist" containerID="dc4ea141b6a80ae098de7ca5a17e9e2b1ec3ebf2112f640c6fad4d5fcc75a51c" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.604570 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc4ea141b6a80ae098de7ca5a17e9e2b1ec3ebf2112f640c6fad4d5fcc75a51c"} err="failed to get container status \"dc4ea141b6a80ae098de7ca5a17e9e2b1ec3ebf2112f640c6fad4d5fcc75a51c\": rpc error: code = NotFound desc = could not find container \"dc4ea141b6a80ae098de7ca5a17e9e2b1ec3ebf2112f640c6fad4d5fcc75a51c\": container with ID starting with dc4ea141b6a80ae098de7ca5a17e9e2b1ec3ebf2112f640c6fad4d5fcc75a51c not found: ID does not exist" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.604603 4714 scope.go:117] "RemoveContainer" containerID="0e066cf92f43693eba9898b6ef36d8d3eb21b4fa7c877b99db7e3e39dddda134" Jan 29 16:16:24 crc kubenswrapper[4714]: E0129 16:16:24.605309 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e066cf92f43693eba9898b6ef36d8d3eb21b4fa7c877b99db7e3e39dddda134\": container with ID starting with 0e066cf92f43693eba9898b6ef36d8d3eb21b4fa7c877b99db7e3e39dddda134 not found: ID does not exist" containerID="0e066cf92f43693eba9898b6ef36d8d3eb21b4fa7c877b99db7e3e39dddda134" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.605520 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e066cf92f43693eba9898b6ef36d8d3eb21b4fa7c877b99db7e3e39dddda134"} err="failed to get container status \"0e066cf92f43693eba9898b6ef36d8d3eb21b4fa7c877b99db7e3e39dddda134\": rpc error: code = NotFound desc = could not find container \"0e066cf92f43693eba9898b6ef36d8d3eb21b4fa7c877b99db7e3e39dddda134\": container with ID starting with 0e066cf92f43693eba9898b6ef36d8d3eb21b4fa7c877b99db7e3e39dddda134 not found: ID does not exist" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.605542 4714 scope.go:117] "RemoveContainer" 
containerID="2cfb1164c8a5f24d11bd2a23214b6a7408be50990447c790085e11ea6faaec50" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.614464 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d05e7c79-7d66-4453-aedb-f240784ff294-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d05e7c79-7d66-4453-aedb-f240784ff294" (UID: "d05e7c79-7d66-4453-aedb-f240784ff294"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.622881 4714 scope.go:117] "RemoveContainer" containerID="2cfb1164c8a5f24d11bd2a23214b6a7408be50990447c790085e11ea6faaec50" Jan 29 16:16:24 crc kubenswrapper[4714]: E0129 16:16:24.623691 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cfb1164c8a5f24d11bd2a23214b6a7408be50990447c790085e11ea6faaec50\": container with ID starting with 2cfb1164c8a5f24d11bd2a23214b6a7408be50990447c790085e11ea6faaec50 not found: ID does not exist" containerID="2cfb1164c8a5f24d11bd2a23214b6a7408be50990447c790085e11ea6faaec50" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.623755 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cfb1164c8a5f24d11bd2a23214b6a7408be50990447c790085e11ea6faaec50"} err="failed to get container status \"2cfb1164c8a5f24d11bd2a23214b6a7408be50990447c790085e11ea6faaec50\": rpc error: code = NotFound desc = could not find container \"2cfb1164c8a5f24d11bd2a23214b6a7408be50990447c790085e11ea6faaec50\": container with ID starting with 2cfb1164c8a5f24d11bd2a23214b6a7408be50990447c790085e11ea6faaec50 not found: ID does not exist" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.623800 4714 scope.go:117] "RemoveContainer" containerID="52559e4420e032272c3033326a96872ff005d794cf95b2dce2bffa130b6cf392" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.639088 4714 scope.go:117] "RemoveContainer" containerID="af26cf3e86e4001d9e7f83e8aa3b6ea940d03f589aa1d1907c488eb1df46568b" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.653700 4714 scope.go:117] "RemoveContainer" containerID="4ddfcc2a03d228c12293d91791425de7e03e1ce7bf286f993aef049f853f58fe" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.668871 4714 scope.go:117] "RemoveContainer" containerID="52559e4420e032272c3033326a96872ff005d794cf95b2dce2bffa130b6cf392" Jan 29 16:16:24 crc kubenswrapper[4714]: E0129 16:16:24.669665 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52559e4420e032272c3033326a96872ff005d794cf95b2dce2bffa130b6cf392\": container with ID starting with 52559e4420e032272c3033326a96872ff005d794cf95b2dce2bffa130b6cf392 not found: ID does not exist" containerID="52559e4420e032272c3033326a96872ff005d794cf95b2dce2bffa130b6cf392" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.669709 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52559e4420e032272c3033326a96872ff005d794cf95b2dce2bffa130b6cf392"} err="failed to get container status \"52559e4420e032272c3033326a96872ff005d794cf95b2dce2bffa130b6cf392\": rpc error: code = NotFound desc = could not find container \"52559e4420e032272c3033326a96872ff005d794cf95b2dce2bffa130b6cf392\": container with ID starting with 52559e4420e032272c3033326a96872ff005d794cf95b2dce2bffa130b6cf392 not found: ID does not exist" Jan 29 16:16:24 crc 
kubenswrapper[4714]: I0129 16:16:24.669743 4714 scope.go:117] "RemoveContainer" containerID="af26cf3e86e4001d9e7f83e8aa3b6ea940d03f589aa1d1907c488eb1df46568b" Jan 29 16:16:24 crc kubenswrapper[4714]: E0129 16:16:24.670081 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af26cf3e86e4001d9e7f83e8aa3b6ea940d03f589aa1d1907c488eb1df46568b\": container with ID starting with af26cf3e86e4001d9e7f83e8aa3b6ea940d03f589aa1d1907c488eb1df46568b not found: ID does not exist" containerID="af26cf3e86e4001d9e7f83e8aa3b6ea940d03f589aa1d1907c488eb1df46568b" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.670151 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af26cf3e86e4001d9e7f83e8aa3b6ea940d03f589aa1d1907c488eb1df46568b"} err="failed to get container status \"af26cf3e86e4001d9e7f83e8aa3b6ea940d03f589aa1d1907c488eb1df46568b\": rpc error: code = NotFound desc = could not find container \"af26cf3e86e4001d9e7f83e8aa3b6ea940d03f589aa1d1907c488eb1df46568b\": container with ID starting with af26cf3e86e4001d9e7f83e8aa3b6ea940d03f589aa1d1907c488eb1df46568b not found: ID does not exist" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.670197 4714 scope.go:117] "RemoveContainer" containerID="4ddfcc2a03d228c12293d91791425de7e03e1ce7bf286f993aef049f853f58fe" Jan 29 16:16:24 crc kubenswrapper[4714]: E0129 16:16:24.670527 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ddfcc2a03d228c12293d91791425de7e03e1ce7bf286f993aef049f853f58fe\": container with ID starting with 4ddfcc2a03d228c12293d91791425de7e03e1ce7bf286f993aef049f853f58fe not found: ID does not exist" containerID="4ddfcc2a03d228c12293d91791425de7e03e1ce7bf286f993aef049f853f58fe" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.670561 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ddfcc2a03d228c12293d91791425de7e03e1ce7bf286f993aef049f853f58fe"} err="failed to get container status \"4ddfcc2a03d228c12293d91791425de7e03e1ce7bf286f993aef049f853f58fe\": rpc error: code = NotFound desc = could not find container \"4ddfcc2a03d228c12293d91791425de7e03e1ce7bf286f993aef049f853f58fe\": container with ID starting with 4ddfcc2a03d228c12293d91791425de7e03e1ce7bf286f993aef049f853f58fe not found: ID does not exist" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.688631 4714 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d05e7c79-7d66-4453-aedb-f240784ff294-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.709372 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7rvrl"] Jan 29 16:16:24 crc kubenswrapper[4714]: W0129 16:16:24.712315 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2696757f_83ca_42df_9855_f76adeee02bb.slice/crio-cdd61689243a69bcd2c8fda2e4c87e8c55a3f1474ee1d4589fa09b3921bf8f48 WatchSource:0}: Error finding container cdd61689243a69bcd2c8fda2e4c87e8c55a3f1474ee1d4589fa09b3921bf8f48: Status 404 returned error can't find the container with id cdd61689243a69bcd2c8fda2e4c87e8c55a3f1474ee1d4589fa09b3921bf8f48 Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.755131 4714 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/community-operators-xtr82"] Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.765726 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xtr82"] Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.787098 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lb68h"] Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.790871 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lb68h"] Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.815584 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nssrv"] Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.827458 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nssrv"] Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.831790 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-74twj"] Jan 29 16:16:24 crc kubenswrapper[4714]: I0129 16:16:24.835330 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-74twj"] Jan 29 16:16:25 crc kubenswrapper[4714]: I0129 16:16:25.454397 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7rvrl" event={"ID":"2696757f-83ca-42df-9855-f76adeee02bb","Type":"ContainerStarted","Data":"b1c9a9e34a8f6e1c92c31b01db09a13a690eb1c214066679034d3934a7c755a7"} Jan 29 16:16:25 crc kubenswrapper[4714]: I0129 16:16:25.454818 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7rvrl" event={"ID":"2696757f-83ca-42df-9855-f76adeee02bb","Type":"ContainerStarted","Data":"cdd61689243a69bcd2c8fda2e4c87e8c55a3f1474ee1d4589fa09b3921bf8f48"} Jan 29 16:16:25 crc kubenswrapper[4714]: I0129 16:16:25.455888 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7rvrl" Jan 29 16:16:25 crc kubenswrapper[4714]: I0129 16:16:25.458984 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7rvrl" Jan 29 16:16:25 crc kubenswrapper[4714]: I0129 16:16:25.478029 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7rvrl" podStartSLOduration=2.478013781 podStartE2EDuration="2.478013781s" podCreationTimestamp="2026-01-29 16:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:16:25.477917528 +0000 UTC m=+391.998418688" watchObservedRunningTime="2026-01-29 16:16:25.478013781 +0000 UTC m=+391.998514901" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.195218 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" path="/var/lib/kubelet/pods/11a30de8-b234-47b4-8fd0-44f0c428be78/volumes" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.196852 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80515d06-c09e-4c9d-a90f-43cc84edf4c9" path="/var/lib/kubelet/pods/80515d06-c09e-4c9d-a90f-43cc84edf4c9/volumes" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.197764 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="a97ed1ff-657f-4bde-943b-78caf9d07f92" path="/var/lib/kubelet/pods/a97ed1ff-657f-4bde-943b-78caf9d07f92/volumes" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.200569 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d05e7c79-7d66-4453-aedb-f240784ff294" path="/var/lib/kubelet/pods/d05e7c79-7d66-4453-aedb-f240784ff294/volumes" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.201991 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eae853ba-61c9-439b-9dc9-21567075f18a" path="/var/lib/kubelet/pods/eae853ba-61c9-439b-9dc9-21567075f18a/volumes" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.565257 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6gkpz"] Jan 29 16:16:26 crc kubenswrapper[4714]: E0129 16:16:26.565453 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" containerName="extract-utilities" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.565465 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" containerName="extract-utilities" Jan 29 16:16:26 crc kubenswrapper[4714]: E0129 16:16:26.565498 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97ed1ff-657f-4bde-943b-78caf9d07f92" containerName="extract-content" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.565505 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97ed1ff-657f-4bde-943b-78caf9d07f92" containerName="extract-content" Jan 29 16:16:26 crc kubenswrapper[4714]: E0129 16:16:26.565517 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80515d06-c09e-4c9d-a90f-43cc84edf4c9" containerName="marketplace-operator" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.565523 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="80515d06-c09e-4c9d-a90f-43cc84edf4c9" containerName="marketplace-operator" Jan 29 16:16:26 crc kubenswrapper[4714]: E0129 16:16:26.565530 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97ed1ff-657f-4bde-943b-78caf9d07f92" containerName="registry-server" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.565535 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97ed1ff-657f-4bde-943b-78caf9d07f92" containerName="registry-server" Jan 29 16:16:26 crc kubenswrapper[4714]: E0129 16:16:26.565547 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae853ba-61c9-439b-9dc9-21567075f18a" containerName="extract-utilities" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.565553 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="eae853ba-61c9-439b-9dc9-21567075f18a" containerName="extract-utilities" Jan 29 16:16:26 crc kubenswrapper[4714]: E0129 16:16:26.565563 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d05e7c79-7d66-4453-aedb-f240784ff294" containerName="registry-server" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.565570 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="d05e7c79-7d66-4453-aedb-f240784ff294" containerName="registry-server" Jan 29 16:16:26 crc kubenswrapper[4714]: E0129 16:16:26.565579 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae853ba-61c9-439b-9dc9-21567075f18a" containerName="registry-server" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.565587 4714 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="eae853ba-61c9-439b-9dc9-21567075f18a" containerName="registry-server" Jan 29 16:16:26 crc kubenswrapper[4714]: E0129 16:16:26.565597 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d05e7c79-7d66-4453-aedb-f240784ff294" containerName="extract-content" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.565603 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="d05e7c79-7d66-4453-aedb-f240784ff294" containerName="extract-content" Jan 29 16:16:26 crc kubenswrapper[4714]: E0129 16:16:26.565611 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae853ba-61c9-439b-9dc9-21567075f18a" containerName="extract-content" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.565617 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="eae853ba-61c9-439b-9dc9-21567075f18a" containerName="extract-content" Jan 29 16:16:26 crc kubenswrapper[4714]: E0129 16:16:26.565623 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" containerName="registry-server" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.565629 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" containerName="registry-server" Jan 29 16:16:26 crc kubenswrapper[4714]: E0129 16:16:26.565637 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d05e7c79-7d66-4453-aedb-f240784ff294" containerName="extract-utilities" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.565642 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="d05e7c79-7d66-4453-aedb-f240784ff294" containerName="extract-utilities" Jan 29 16:16:26 crc kubenswrapper[4714]: E0129 16:16:26.565652 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" containerName="extract-content" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.565657 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" containerName="extract-content" Jan 29 16:16:26 crc kubenswrapper[4714]: E0129 16:16:26.565665 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97ed1ff-657f-4bde-943b-78caf9d07f92" containerName="extract-utilities" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.565671 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97ed1ff-657f-4bde-943b-78caf9d07f92" containerName="extract-utilities" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.565915 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="80515d06-c09e-4c9d-a90f-43cc84edf4c9" containerName="marketplace-operator" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.565939 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="d05e7c79-7d66-4453-aedb-f240784ff294" containerName="registry-server" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.565949 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="11a30de8-b234-47b4-8fd0-44f0c428be78" containerName="registry-server" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.565956 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="eae853ba-61c9-439b-9dc9-21567075f18a" containerName="registry-server" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.565965 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="a97ed1ff-657f-4bde-943b-78caf9d07f92" containerName="registry-server" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.566655 
4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gkpz" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.569568 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.577042 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gkpz"] Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.714862 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04dba3a0-a89b-48c5-97ef-e5660d1ae7bb-utilities\") pod \"redhat-marketplace-6gkpz\" (UID: \"04dba3a0-a89b-48c5-97ef-e5660d1ae7bb\") " pod="openshift-marketplace/redhat-marketplace-6gkpz" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.714910 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trwfb\" (UniqueName: \"kubernetes.io/projected/04dba3a0-a89b-48c5-97ef-e5660d1ae7bb-kube-api-access-trwfb\") pod \"redhat-marketplace-6gkpz\" (UID: \"04dba3a0-a89b-48c5-97ef-e5660d1ae7bb\") " pod="openshift-marketplace/redhat-marketplace-6gkpz" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.714967 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04dba3a0-a89b-48c5-97ef-e5660d1ae7bb-catalog-content\") pod \"redhat-marketplace-6gkpz\" (UID: \"04dba3a0-a89b-48c5-97ef-e5660d1ae7bb\") " pod="openshift-marketplace/redhat-marketplace-6gkpz" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.764213 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-knxc8"] Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.766571 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-knxc8" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.768196 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.808808 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-knxc8"] Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.816645 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04dba3a0-a89b-48c5-97ef-e5660d1ae7bb-utilities\") pod \"redhat-marketplace-6gkpz\" (UID: \"04dba3a0-a89b-48c5-97ef-e5660d1ae7bb\") " pod="openshift-marketplace/redhat-marketplace-6gkpz" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.816700 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trwfb\" (UniqueName: \"kubernetes.io/projected/04dba3a0-a89b-48c5-97ef-e5660d1ae7bb-kube-api-access-trwfb\") pod \"redhat-marketplace-6gkpz\" (UID: \"04dba3a0-a89b-48c5-97ef-e5660d1ae7bb\") " pod="openshift-marketplace/redhat-marketplace-6gkpz" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.816740 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04dba3a0-a89b-48c5-97ef-e5660d1ae7bb-catalog-content\") pod \"redhat-marketplace-6gkpz\" (UID: \"04dba3a0-a89b-48c5-97ef-e5660d1ae7bb\") " pod="openshift-marketplace/redhat-marketplace-6gkpz" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.817299 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04dba3a0-a89b-48c5-97ef-e5660d1ae7bb-utilities\") pod \"redhat-marketplace-6gkpz\" (UID: \"04dba3a0-a89b-48c5-97ef-e5660d1ae7bb\") " pod="openshift-marketplace/redhat-marketplace-6gkpz" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.817324 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04dba3a0-a89b-48c5-97ef-e5660d1ae7bb-catalog-content\") pod \"redhat-marketplace-6gkpz\" (UID: \"04dba3a0-a89b-48c5-97ef-e5660d1ae7bb\") " pod="openshift-marketplace/redhat-marketplace-6gkpz" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.834233 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trwfb\" (UniqueName: \"kubernetes.io/projected/04dba3a0-a89b-48c5-97ef-e5660d1ae7bb-kube-api-access-trwfb\") pod \"redhat-marketplace-6gkpz\" (UID: \"04dba3a0-a89b-48c5-97ef-e5660d1ae7bb\") " pod="openshift-marketplace/redhat-marketplace-6gkpz" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.916595 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gkpz" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.917616 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de6c9fbd-8657-4434-bff5-468276791466-catalog-content\") pod \"redhat-operators-knxc8\" (UID: \"de6c9fbd-8657-4434-bff5-468276791466\") " pod="openshift-marketplace/redhat-operators-knxc8" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.917675 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de6c9fbd-8657-4434-bff5-468276791466-utilities\") pod \"redhat-operators-knxc8\" (UID: \"de6c9fbd-8657-4434-bff5-468276791466\") " pod="openshift-marketplace/redhat-operators-knxc8" Jan 29 16:16:26 crc kubenswrapper[4714]: I0129 16:16:26.917728 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75mht\" (UniqueName: \"kubernetes.io/projected/de6c9fbd-8657-4434-bff5-468276791466-kube-api-access-75mht\") pod \"redhat-operators-knxc8\" (UID: \"de6c9fbd-8657-4434-bff5-468276791466\") " pod="openshift-marketplace/redhat-operators-knxc8" Jan 29 16:16:27 crc kubenswrapper[4714]: I0129 16:16:27.018951 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de6c9fbd-8657-4434-bff5-468276791466-utilities\") pod \"redhat-operators-knxc8\" (UID: \"de6c9fbd-8657-4434-bff5-468276791466\") " pod="openshift-marketplace/redhat-operators-knxc8" Jan 29 16:16:27 crc kubenswrapper[4714]: I0129 16:16:27.019254 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75mht\" (UniqueName: \"kubernetes.io/projected/de6c9fbd-8657-4434-bff5-468276791466-kube-api-access-75mht\") pod \"redhat-operators-knxc8\" (UID: \"de6c9fbd-8657-4434-bff5-468276791466\") " pod="openshift-marketplace/redhat-operators-knxc8" Jan 29 16:16:27 crc kubenswrapper[4714]: I0129 16:16:27.019330 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de6c9fbd-8657-4434-bff5-468276791466-catalog-content\") pod \"redhat-operators-knxc8\" (UID: \"de6c9fbd-8657-4434-bff5-468276791466\") " pod="openshift-marketplace/redhat-operators-knxc8" Jan 29 16:16:27 crc kubenswrapper[4714]: I0129 16:16:27.019553 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de6c9fbd-8657-4434-bff5-468276791466-utilities\") pod \"redhat-operators-knxc8\" (UID: \"de6c9fbd-8657-4434-bff5-468276791466\") " pod="openshift-marketplace/redhat-operators-knxc8" Jan 29 16:16:27 crc kubenswrapper[4714]: I0129 16:16:27.019754 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de6c9fbd-8657-4434-bff5-468276791466-catalog-content\") pod \"redhat-operators-knxc8\" (UID: \"de6c9fbd-8657-4434-bff5-468276791466\") " pod="openshift-marketplace/redhat-operators-knxc8" Jan 29 16:16:27 crc kubenswrapper[4714]: I0129 16:16:27.037239 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75mht\" (UniqueName: \"kubernetes.io/projected/de6c9fbd-8657-4434-bff5-468276791466-kube-api-access-75mht\") pod \"redhat-operators-knxc8\" (UID: 
\"de6c9fbd-8657-4434-bff5-468276791466\") " pod="openshift-marketplace/redhat-operators-knxc8" Jan 29 16:16:27 crc kubenswrapper[4714]: I0129 16:16:27.089694 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-knxc8" Jan 29 16:16:27 crc kubenswrapper[4714]: I0129 16:16:27.128034 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gkpz"] Jan 29 16:16:27 crc kubenswrapper[4714]: W0129 16:16:27.132607 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04dba3a0_a89b_48c5_97ef_e5660d1ae7bb.slice/crio-b49151a6cd20c5c170981578deb8f14550600f3f56ff6e1d1159c6320fbd9d56 WatchSource:0}: Error finding container b49151a6cd20c5c170981578deb8f14550600f3f56ff6e1d1159c6320fbd9d56: Status 404 returned error can't find the container with id b49151a6cd20c5c170981578deb8f14550600f3f56ff6e1d1159c6320fbd9d56 Jan 29 16:16:27 crc kubenswrapper[4714]: I0129 16:16:27.477512 4714 generic.go:334] "Generic (PLEG): container finished" podID="04dba3a0-a89b-48c5-97ef-e5660d1ae7bb" containerID="6787483294efed802480404e6f60af4b4a8206cdea6bbd7f4a0e458ef6b4913c" exitCode=0 Jan 29 16:16:27 crc kubenswrapper[4714]: I0129 16:16:27.479174 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gkpz" event={"ID":"04dba3a0-a89b-48c5-97ef-e5660d1ae7bb","Type":"ContainerDied","Data":"6787483294efed802480404e6f60af4b4a8206cdea6bbd7f4a0e458ef6b4913c"} Jan 29 16:16:27 crc kubenswrapper[4714]: I0129 16:16:27.479204 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gkpz" event={"ID":"04dba3a0-a89b-48c5-97ef-e5660d1ae7bb","Type":"ContainerStarted","Data":"b49151a6cd20c5c170981578deb8f14550600f3f56ff6e1d1159c6320fbd9d56"} Jan 29 16:16:27 crc kubenswrapper[4714]: I0129 16:16:27.529659 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-knxc8"] Jan 29 16:16:27 crc kubenswrapper[4714]: I0129 16:16:27.844830 4714 patch_prober.go:28] interesting pod/machine-config-daemon-ppngk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:16:27 crc kubenswrapper[4714]: I0129 16:16:27.845262 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:16:28 crc kubenswrapper[4714]: I0129 16:16:28.483724 4714 generic.go:334] "Generic (PLEG): container finished" podID="de6c9fbd-8657-4434-bff5-468276791466" containerID="f21d6daee17eb6d985ca58354495191425d749da3fa45be5996908ac6f08780f" exitCode=0 Jan 29 16:16:28 crc kubenswrapper[4714]: I0129 16:16:28.483828 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knxc8" event={"ID":"de6c9fbd-8657-4434-bff5-468276791466","Type":"ContainerDied","Data":"f21d6daee17eb6d985ca58354495191425d749da3fa45be5996908ac6f08780f"} Jan 29 16:16:28 crc kubenswrapper[4714]: I0129 16:16:28.483871 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knxc8" 
event={"ID":"de6c9fbd-8657-4434-bff5-468276791466","Type":"ContainerStarted","Data":"29b015d32919088e769aeed20bae467711af5bb04181697f47d676a18c6bd238"} Jan 29 16:16:28 crc kubenswrapper[4714]: I0129 16:16:28.486123 4714 generic.go:334] "Generic (PLEG): container finished" podID="04dba3a0-a89b-48c5-97ef-e5660d1ae7bb" containerID="d0f6caf51e1515d83dc6f892c64af223c43e57b8b5ed10f67a693edcb05bd71e" exitCode=0 Jan 29 16:16:28 crc kubenswrapper[4714]: I0129 16:16:28.486164 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gkpz" event={"ID":"04dba3a0-a89b-48c5-97ef-e5660d1ae7bb","Type":"ContainerDied","Data":"d0f6caf51e1515d83dc6f892c64af223c43e57b8b5ed10f67a693edcb05bd71e"} Jan 29 16:16:28 crc kubenswrapper[4714]: I0129 16:16:28.963785 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-slsxz"] Jan 29 16:16:28 crc kubenswrapper[4714]: I0129 16:16:28.965961 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-slsxz" Jan 29 16:16:28 crc kubenswrapper[4714]: I0129 16:16:28.969975 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 16:16:28 crc kubenswrapper[4714]: I0129 16:16:28.973688 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-slsxz"] Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.047689 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16cb244c-6c63-47e6-a312-ba33ab4d4899-catalog-content\") pod \"certified-operators-slsxz\" (UID: \"16cb244c-6c63-47e6-a312-ba33ab4d4899\") " pod="openshift-marketplace/certified-operators-slsxz" Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.047728 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsgc7\" (UniqueName: \"kubernetes.io/projected/16cb244c-6c63-47e6-a312-ba33ab4d4899-kube-api-access-jsgc7\") pod \"certified-operators-slsxz\" (UID: \"16cb244c-6c63-47e6-a312-ba33ab4d4899\") " pod="openshift-marketplace/certified-operators-slsxz" Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.047760 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16cb244c-6c63-47e6-a312-ba33ab4d4899-utilities\") pod \"certified-operators-slsxz\" (UID: \"16cb244c-6c63-47e6-a312-ba33ab4d4899\") " pod="openshift-marketplace/certified-operators-slsxz" Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.149266 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16cb244c-6c63-47e6-a312-ba33ab4d4899-catalog-content\") pod \"certified-operators-slsxz\" (UID: \"16cb244c-6c63-47e6-a312-ba33ab4d4899\") " pod="openshift-marketplace/certified-operators-slsxz" Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.149625 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsgc7\" (UniqueName: \"kubernetes.io/projected/16cb244c-6c63-47e6-a312-ba33ab4d4899-kube-api-access-jsgc7\") pod \"certified-operators-slsxz\" (UID: \"16cb244c-6c63-47e6-a312-ba33ab4d4899\") " pod="openshift-marketplace/certified-operators-slsxz" Jan 29 16:16:29 crc kubenswrapper[4714]: 
I0129 16:16:29.149674 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16cb244c-6c63-47e6-a312-ba33ab4d4899-utilities\") pod \"certified-operators-slsxz\" (UID: \"16cb244c-6c63-47e6-a312-ba33ab4d4899\") " pod="openshift-marketplace/certified-operators-slsxz" Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.150369 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16cb244c-6c63-47e6-a312-ba33ab4d4899-utilities\") pod \"certified-operators-slsxz\" (UID: \"16cb244c-6c63-47e6-a312-ba33ab4d4899\") " pod="openshift-marketplace/certified-operators-slsxz" Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.151269 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16cb244c-6c63-47e6-a312-ba33ab4d4899-catalog-content\") pod \"certified-operators-slsxz\" (UID: \"16cb244c-6c63-47e6-a312-ba33ab4d4899\") " pod="openshift-marketplace/certified-operators-slsxz" Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.168401 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ndx6p"] Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.170011 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ndx6p" Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.172294 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.178807 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsgc7\" (UniqueName: \"kubernetes.io/projected/16cb244c-6c63-47e6-a312-ba33ab4d4899-kube-api-access-jsgc7\") pod \"certified-operators-slsxz\" (UID: \"16cb244c-6c63-47e6-a312-ba33ab4d4899\") " pod="openshift-marketplace/certified-operators-slsxz" Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.190980 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ndx6p"] Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.251065 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnd9z\" (UniqueName: \"kubernetes.io/projected/ca655e22-8f97-4e9e-b115-734ae1af7d50-kube-api-access-qnd9z\") pod \"community-operators-ndx6p\" (UID: \"ca655e22-8f97-4e9e-b115-734ae1af7d50\") " pod="openshift-marketplace/community-operators-ndx6p" Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.251116 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca655e22-8f97-4e9e-b115-734ae1af7d50-catalog-content\") pod \"community-operators-ndx6p\" (UID: \"ca655e22-8f97-4e9e-b115-734ae1af7d50\") " pod="openshift-marketplace/community-operators-ndx6p" Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.251140 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca655e22-8f97-4e9e-b115-734ae1af7d50-utilities\") pod \"community-operators-ndx6p\" (UID: \"ca655e22-8f97-4e9e-b115-734ae1af7d50\") " pod="openshift-marketplace/community-operators-ndx6p" Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.322050 4714 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-slsxz" Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.352014 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnd9z\" (UniqueName: \"kubernetes.io/projected/ca655e22-8f97-4e9e-b115-734ae1af7d50-kube-api-access-qnd9z\") pod \"community-operators-ndx6p\" (UID: \"ca655e22-8f97-4e9e-b115-734ae1af7d50\") " pod="openshift-marketplace/community-operators-ndx6p" Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.352078 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca655e22-8f97-4e9e-b115-734ae1af7d50-catalog-content\") pod \"community-operators-ndx6p\" (UID: \"ca655e22-8f97-4e9e-b115-734ae1af7d50\") " pod="openshift-marketplace/community-operators-ndx6p" Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.352104 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca655e22-8f97-4e9e-b115-734ae1af7d50-utilities\") pod \"community-operators-ndx6p\" (UID: \"ca655e22-8f97-4e9e-b115-734ae1af7d50\") " pod="openshift-marketplace/community-operators-ndx6p" Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.352487 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca655e22-8f97-4e9e-b115-734ae1af7d50-utilities\") pod \"community-operators-ndx6p\" (UID: \"ca655e22-8f97-4e9e-b115-734ae1af7d50\") " pod="openshift-marketplace/community-operators-ndx6p" Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.353466 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca655e22-8f97-4e9e-b115-734ae1af7d50-catalog-content\") pod \"community-operators-ndx6p\" (UID: \"ca655e22-8f97-4e9e-b115-734ae1af7d50\") " pod="openshift-marketplace/community-operators-ndx6p" Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.372744 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnd9z\" (UniqueName: \"kubernetes.io/projected/ca655e22-8f97-4e9e-b115-734ae1af7d50-kube-api-access-qnd9z\") pod \"community-operators-ndx6p\" (UID: \"ca655e22-8f97-4e9e-b115-734ae1af7d50\") " pod="openshift-marketplace/community-operators-ndx6p" Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.502433 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gkpz" event={"ID":"04dba3a0-a89b-48c5-97ef-e5660d1ae7bb","Type":"ContainerStarted","Data":"35fc0ccefdc74ae4d4ac8a83e158c0a04ec3c4c8190a9cb3d91d2dd6bb5fe65e"} Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.512052 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ndx6p" Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.526569 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6gkpz" podStartSLOduration=2.09524823 podStartE2EDuration="3.526552168s" podCreationTimestamp="2026-01-29 16:16:26 +0000 UTC" firstStartedPulling="2026-01-29 16:16:27.480421539 +0000 UTC m=+394.000922659" lastFinishedPulling="2026-01-29 16:16:28.911725477 +0000 UTC m=+395.432226597" observedRunningTime="2026-01-29 16:16:29.52419837 +0000 UTC m=+396.044699490" watchObservedRunningTime="2026-01-29 16:16:29.526552168 +0000 UTC m=+396.047053288" Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.536488 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-slsxz"] Jan 29 16:16:29 crc kubenswrapper[4714]: W0129 16:16:29.545649 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16cb244c_6c63_47e6_a312_ba33ab4d4899.slice/crio-884b32749714c8c7d5a95398be26c6c16b6bc585a925ce02a67b3f4c885cba1c WatchSource:0}: Error finding container 884b32749714c8c7d5a95398be26c6c16b6bc585a925ce02a67b3f4c885cba1c: Status 404 returned error can't find the container with id 884b32749714c8c7d5a95398be26c6c16b6bc585a925ce02a67b3f4c885cba1c Jan 29 16:16:29 crc kubenswrapper[4714]: I0129 16:16:29.916848 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ndx6p"] Jan 29 16:16:29 crc kubenswrapper[4714]: W0129 16:16:29.923094 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca655e22_8f97_4e9e_b115_734ae1af7d50.slice/crio-756fbfa36039add55873032181de3423d6d3224932100855a9a3c1eb47efd1fc WatchSource:0}: Error finding container 756fbfa36039add55873032181de3423d6d3224932100855a9a3c1eb47efd1fc: Status 404 returned error can't find the container with id 756fbfa36039add55873032181de3423d6d3224932100855a9a3c1eb47efd1fc Jan 29 16:16:30 crc kubenswrapper[4714]: I0129 16:16:30.053897 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-xrw9s" Jan 29 16:16:30 crc kubenswrapper[4714]: I0129 16:16:30.119949 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gnjmm"] Jan 29 16:16:30 crc kubenswrapper[4714]: I0129 16:16:30.509281 4714 generic.go:334] "Generic (PLEG): container finished" podID="ca655e22-8f97-4e9e-b115-734ae1af7d50" containerID="9a954353aef73625a94aef15b9031dd32efca16dc8803b36934b6c950db5a96c" exitCode=0 Jan 29 16:16:30 crc kubenswrapper[4714]: I0129 16:16:30.509331 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndx6p" event={"ID":"ca655e22-8f97-4e9e-b115-734ae1af7d50","Type":"ContainerDied","Data":"9a954353aef73625a94aef15b9031dd32efca16dc8803b36934b6c950db5a96c"} Jan 29 16:16:30 crc kubenswrapper[4714]: I0129 16:16:30.509377 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndx6p" event={"ID":"ca655e22-8f97-4e9e-b115-734ae1af7d50","Type":"ContainerStarted","Data":"756fbfa36039add55873032181de3423d6d3224932100855a9a3c1eb47efd1fc"} Jan 29 16:16:30 crc kubenswrapper[4714]: I0129 16:16:30.511194 4714 generic.go:334] "Generic (PLEG): container finished" 
podID="16cb244c-6c63-47e6-a312-ba33ab4d4899" containerID="62d59d1936ff31d8ea6d4c8d7ff82c3652ecb0ea4515c3877dfee41f17b9ee07" exitCode=0 Jan 29 16:16:30 crc kubenswrapper[4714]: I0129 16:16:30.511267 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-slsxz" event={"ID":"16cb244c-6c63-47e6-a312-ba33ab4d4899","Type":"ContainerDied","Data":"62d59d1936ff31d8ea6d4c8d7ff82c3652ecb0ea4515c3877dfee41f17b9ee07"} Jan 29 16:16:30 crc kubenswrapper[4714]: I0129 16:16:30.511300 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-slsxz" event={"ID":"16cb244c-6c63-47e6-a312-ba33ab4d4899","Type":"ContainerStarted","Data":"884b32749714c8c7d5a95398be26c6c16b6bc585a925ce02a67b3f4c885cba1c"} Jan 29 16:16:30 crc kubenswrapper[4714]: I0129 16:16:30.514266 4714 generic.go:334] "Generic (PLEG): container finished" podID="de6c9fbd-8657-4434-bff5-468276791466" containerID="f26a96b15ed747c0a4867efe91365ae7ff513a3b7a8ae9f0e7807a4e2f772cdc" exitCode=0 Jan 29 16:16:30 crc kubenswrapper[4714]: I0129 16:16:30.514319 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knxc8" event={"ID":"de6c9fbd-8657-4434-bff5-468276791466","Type":"ContainerDied","Data":"f26a96b15ed747c0a4867efe91365ae7ff513a3b7a8ae9f0e7807a4e2f772cdc"} Jan 29 16:16:31 crc kubenswrapper[4714]: I0129 16:16:31.521186 4714 generic.go:334] "Generic (PLEG): container finished" podID="ca655e22-8f97-4e9e-b115-734ae1af7d50" containerID="7c8138352f6edb954a09ba035a20f9a65c4b7052c054970be09c397de55273a1" exitCode=0 Jan 29 16:16:31 crc kubenswrapper[4714]: I0129 16:16:31.521705 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndx6p" event={"ID":"ca655e22-8f97-4e9e-b115-734ae1af7d50","Type":"ContainerDied","Data":"7c8138352f6edb954a09ba035a20f9a65c4b7052c054970be09c397de55273a1"} Jan 29 16:16:31 crc kubenswrapper[4714]: I0129 16:16:31.524339 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-slsxz" event={"ID":"16cb244c-6c63-47e6-a312-ba33ab4d4899","Type":"ContainerStarted","Data":"3bc5b72a0a9952ffe2ca554b13a97237dd7b15673e19bc821cf21f1143b8ae85"} Jan 29 16:16:31 crc kubenswrapper[4714]: I0129 16:16:31.528462 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knxc8" event={"ID":"de6c9fbd-8657-4434-bff5-468276791466","Type":"ContainerStarted","Data":"3649c308dddf3d4460709e70403d1db1343f0378f932455c872f88fc66376358"} Jan 29 16:16:31 crc kubenswrapper[4714]: I0129 16:16:31.561511 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-knxc8" podStartSLOduration=3.064075607 podStartE2EDuration="5.56148611s" podCreationTimestamp="2026-01-29 16:16:26 +0000 UTC" firstStartedPulling="2026-01-29 16:16:28.486003545 +0000 UTC m=+395.006504665" lastFinishedPulling="2026-01-29 16:16:30.983414048 +0000 UTC m=+397.503915168" observedRunningTime="2026-01-29 16:16:31.560590964 +0000 UTC m=+398.081092104" watchObservedRunningTime="2026-01-29 16:16:31.56148611 +0000 UTC m=+398.081987230" Jan 29 16:16:32 crc kubenswrapper[4714]: I0129 16:16:32.535489 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndx6p" event={"ID":"ca655e22-8f97-4e9e-b115-734ae1af7d50","Type":"ContainerStarted","Data":"a9c4299a993eca4f0aa5c07550820a1ed2d795a2558e78a49fa48d90534cf65d"} Jan 29 16:16:32 crc 
kubenswrapper[4714]: I0129 16:16:32.537443 4714 generic.go:334] "Generic (PLEG): container finished" podID="16cb244c-6c63-47e6-a312-ba33ab4d4899" containerID="3bc5b72a0a9952ffe2ca554b13a97237dd7b15673e19bc821cf21f1143b8ae85" exitCode=0 Jan 29 16:16:32 crc kubenswrapper[4714]: I0129 16:16:32.537517 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-slsxz" event={"ID":"16cb244c-6c63-47e6-a312-ba33ab4d4899","Type":"ContainerDied","Data":"3bc5b72a0a9952ffe2ca554b13a97237dd7b15673e19bc821cf21f1143b8ae85"} Jan 29 16:16:32 crc kubenswrapper[4714]: I0129 16:16:32.537580 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-slsxz" event={"ID":"16cb244c-6c63-47e6-a312-ba33ab4d4899","Type":"ContainerStarted","Data":"8cebb6abc380ded14169355538a0129fc12d3b96fafd58e18998a7ab9b5ba7ab"} Jan 29 16:16:32 crc kubenswrapper[4714]: I0129 16:16:32.553161 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ndx6p" podStartSLOduration=2.144912214 podStartE2EDuration="3.553140611s" podCreationTimestamp="2026-01-29 16:16:29 +0000 UTC" firstStartedPulling="2026-01-29 16:16:30.510974837 +0000 UTC m=+397.031475957" lastFinishedPulling="2026-01-29 16:16:31.919203224 +0000 UTC m=+398.439704354" observedRunningTime="2026-01-29 16:16:32.550543635 +0000 UTC m=+399.071044755" watchObservedRunningTime="2026-01-29 16:16:32.553140611 +0000 UTC m=+399.073641731" Jan 29 16:16:32 crc kubenswrapper[4714]: I0129 16:16:32.577754 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-slsxz" podStartSLOduration=3.148554021 podStartE2EDuration="4.577731336s" podCreationTimestamp="2026-01-29 16:16:28 +0000 UTC" firstStartedPulling="2026-01-29 16:16:30.51277732 +0000 UTC m=+397.033278440" lastFinishedPulling="2026-01-29 16:16:31.941954595 +0000 UTC m=+398.462455755" observedRunningTime="2026-01-29 16:16:32.574538673 +0000 UTC m=+399.095039793" watchObservedRunningTime="2026-01-29 16:16:32.577731336 +0000 UTC m=+399.098232456" Jan 29 16:16:36 crc kubenswrapper[4714]: I0129 16:16:36.917555 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6gkpz" Jan 29 16:16:36 crc kubenswrapper[4714]: I0129 16:16:36.918239 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6gkpz" Jan 29 16:16:36 crc kubenswrapper[4714]: I0129 16:16:36.959472 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6gkpz" Jan 29 16:16:37 crc kubenswrapper[4714]: I0129 16:16:37.090524 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-knxc8" Jan 29 16:16:37 crc kubenswrapper[4714]: I0129 16:16:37.090590 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-knxc8" Jan 29 16:16:37 crc kubenswrapper[4714]: I0129 16:16:37.626222 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6gkpz" Jan 29 16:16:38 crc kubenswrapper[4714]: I0129 16:16:38.143942 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-knxc8" podUID="de6c9fbd-8657-4434-bff5-468276791466" containerName="registry-server" probeResult="failure" output=< Jan 
29 16:16:38 crc kubenswrapper[4714]: timeout: failed to connect service ":50051" within 1s Jan 29 16:16:38 crc kubenswrapper[4714]: > Jan 29 16:16:39 crc kubenswrapper[4714]: I0129 16:16:39.322922 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-slsxz" Jan 29 16:16:39 crc kubenswrapper[4714]: I0129 16:16:39.323001 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-slsxz" Jan 29 16:16:39 crc kubenswrapper[4714]: I0129 16:16:39.364485 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-slsxz" Jan 29 16:16:39 crc kubenswrapper[4714]: I0129 16:16:39.512872 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ndx6p" Jan 29 16:16:39 crc kubenswrapper[4714]: I0129 16:16:39.512923 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ndx6p" Jan 29 16:16:39 crc kubenswrapper[4714]: I0129 16:16:39.562944 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ndx6p" Jan 29 16:16:39 crc kubenswrapper[4714]: I0129 16:16:39.618783 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ndx6p" Jan 29 16:16:39 crc kubenswrapper[4714]: I0129 16:16:39.619951 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-slsxz" Jan 29 16:16:47 crc kubenswrapper[4714]: I0129 16:16:47.153122 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-knxc8" Jan 29 16:16:47 crc kubenswrapper[4714]: I0129 16:16:47.217000 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-knxc8" Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.170112 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" podUID="48be8ad8-4c02-4bea-a143-449763b39d54" containerName="registry" containerID="cri-o://f2810d7fe0cd9711b0f9c4dd754cc2bb22760a85d91d92553df7afe7b8378eac" gracePeriod=30 Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.567553 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.660022 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48be8ad8-4c02-4bea-a143-449763b39d54-trusted-ca\") pod \"48be8ad8-4c02-4bea-a143-449763b39d54\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.660196 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/48be8ad8-4c02-4bea-a143-449763b39d54-registry-tls\") pod \"48be8ad8-4c02-4bea-a143-449763b39d54\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.660350 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/48be8ad8-4c02-4bea-a143-449763b39d54-registry-certificates\") pod \"48be8ad8-4c02-4bea-a143-449763b39d54\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.660424 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/48be8ad8-4c02-4bea-a143-449763b39d54-ca-trust-extracted\") pod \"48be8ad8-4c02-4bea-a143-449763b39d54\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.660811 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"48be8ad8-4c02-4bea-a143-449763b39d54\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.660913 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/48be8ad8-4c02-4bea-a143-449763b39d54-installation-pull-secrets\") pod \"48be8ad8-4c02-4bea-a143-449763b39d54\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.661050 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/48be8ad8-4c02-4bea-a143-449763b39d54-bound-sa-token\") pod \"48be8ad8-4c02-4bea-a143-449763b39d54\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.661346 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh5km\" (UniqueName: \"kubernetes.io/projected/48be8ad8-4c02-4bea-a143-449763b39d54-kube-api-access-wh5km\") pod \"48be8ad8-4c02-4bea-a143-449763b39d54\" (UID: \"48be8ad8-4c02-4bea-a143-449763b39d54\") " Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.663290 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48be8ad8-4c02-4bea-a143-449763b39d54-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "48be8ad8-4c02-4bea-a143-449763b39d54" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.663389 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48be8ad8-4c02-4bea-a143-449763b39d54-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "48be8ad8-4c02-4bea-a143-449763b39d54" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.664095 4714 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48be8ad8-4c02-4bea-a143-449763b39d54-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.664131 4714 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/48be8ad8-4c02-4bea-a143-449763b39d54-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.667017 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48be8ad8-4c02-4bea-a143-449763b39d54-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "48be8ad8-4c02-4bea-a143-449763b39d54" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.670246 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48be8ad8-4c02-4bea-a143-449763b39d54-kube-api-access-wh5km" (OuterVolumeSpecName: "kube-api-access-wh5km") pod "48be8ad8-4c02-4bea-a143-449763b39d54" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54"). InnerVolumeSpecName "kube-api-access-wh5km". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.674066 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48be8ad8-4c02-4bea-a143-449763b39d54-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "48be8ad8-4c02-4bea-a143-449763b39d54" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.674188 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48be8ad8-4c02-4bea-a143-449763b39d54-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "48be8ad8-4c02-4bea-a143-449763b39d54" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.675060 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "48be8ad8-4c02-4bea-a143-449763b39d54" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.675767 4714 generic.go:334] "Generic (PLEG): container finished" podID="48be8ad8-4c02-4bea-a143-449763b39d54" containerID="f2810d7fe0cd9711b0f9c4dd754cc2bb22760a85d91d92553df7afe7b8378eac" exitCode=0 Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.675815 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" event={"ID":"48be8ad8-4c02-4bea-a143-449763b39d54","Type":"ContainerDied","Data":"f2810d7fe0cd9711b0f9c4dd754cc2bb22760a85d91d92553df7afe7b8378eac"} Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.675849 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" event={"ID":"48be8ad8-4c02-4bea-a143-449763b39d54","Type":"ContainerDied","Data":"815b16152db25222f3f6a5ff40233d8cdbe464e73d20d130a327746193531954"} Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.675872 4714 scope.go:117] "RemoveContainer" containerID="f2810d7fe0cd9711b0f9c4dd754cc2bb22760a85d91d92553df7afe7b8378eac" Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.676038 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-gnjmm" Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.696237 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48be8ad8-4c02-4bea-a143-449763b39d54-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "48be8ad8-4c02-4bea-a143-449763b39d54" (UID: "48be8ad8-4c02-4bea-a143-449763b39d54"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.714424 4714 scope.go:117] "RemoveContainer" containerID="f2810d7fe0cd9711b0f9c4dd754cc2bb22760a85d91d92553df7afe7b8378eac" Jan 29 16:16:55 crc kubenswrapper[4714]: E0129 16:16:55.714999 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2810d7fe0cd9711b0f9c4dd754cc2bb22760a85d91d92553df7afe7b8378eac\": container with ID starting with f2810d7fe0cd9711b0f9c4dd754cc2bb22760a85d91d92553df7afe7b8378eac not found: ID does not exist" containerID="f2810d7fe0cd9711b0f9c4dd754cc2bb22760a85d91d92553df7afe7b8378eac" Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.715034 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2810d7fe0cd9711b0f9c4dd754cc2bb22760a85d91d92553df7afe7b8378eac"} err="failed to get container status \"f2810d7fe0cd9711b0f9c4dd754cc2bb22760a85d91d92553df7afe7b8378eac\": rpc error: code = NotFound desc = could not find container \"f2810d7fe0cd9711b0f9c4dd754cc2bb22760a85d91d92553df7afe7b8378eac\": container with ID starting with f2810d7fe0cd9711b0f9c4dd754cc2bb22760a85d91d92553df7afe7b8378eac not found: ID does not exist" Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.765699 4714 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/48be8ad8-4c02-4bea-a143-449763b39d54-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.765760 4714 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/48be8ad8-4c02-4bea-a143-449763b39d54-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.765781 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh5km\" (UniqueName: \"kubernetes.io/projected/48be8ad8-4c02-4bea-a143-449763b39d54-kube-api-access-wh5km\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.765798 4714 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/48be8ad8-4c02-4bea-a143-449763b39d54-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:55 crc kubenswrapper[4714]: I0129 16:16:55.765817 4714 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/48be8ad8-4c02-4bea-a143-449763b39d54-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:56 crc kubenswrapper[4714]: I0129 16:16:56.036190 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gnjmm"] Jan 29 16:16:56 crc kubenswrapper[4714]: I0129 16:16:56.042903 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gnjmm"] Jan 29 16:16:56 crc kubenswrapper[4714]: I0129 16:16:56.195530 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48be8ad8-4c02-4bea-a143-449763b39d54" path="/var/lib/kubelet/pods/48be8ad8-4c02-4bea-a143-449763b39d54/volumes" Jan 29 16:16:57 crc kubenswrapper[4714]: I0129 16:16:57.853070 4714 patch_prober.go:28] interesting pod/machine-config-daemon-ppngk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:16:57 crc kubenswrapper[4714]: I0129 16:16:57.853169 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:16:57 crc kubenswrapper[4714]: I0129 16:16:57.853239 4714 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" Jan 29 16:16:57 crc kubenswrapper[4714]: I0129 16:16:57.854139 4714 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32d59d2a4eb095db60ac3365411265035b47b1f01d164e950c86daa5aecb2792"} pod="openshift-machine-config-operator/machine-config-daemon-ppngk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:16:57 crc kubenswrapper[4714]: I0129 16:16:57.854247 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" containerID="cri-o://32d59d2a4eb095db60ac3365411265035b47b1f01d164e950c86daa5aecb2792" gracePeriod=600 Jan 29 16:16:58 crc kubenswrapper[4714]: I0129 16:16:58.708020 4714 generic.go:334] "Generic (PLEG): container finished" podID="c8c765f3-89eb-4077-8829-03e86eb0c90c" 
containerID="32d59d2a4eb095db60ac3365411265035b47b1f01d164e950c86daa5aecb2792" exitCode=0 Jan 29 16:16:58 crc kubenswrapper[4714]: I0129 16:16:58.708104 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" event={"ID":"c8c765f3-89eb-4077-8829-03e86eb0c90c","Type":"ContainerDied","Data":"32d59d2a4eb095db60ac3365411265035b47b1f01d164e950c86daa5aecb2792"} Jan 29 16:16:58 crc kubenswrapper[4714]: I0129 16:16:58.708840 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" event={"ID":"c8c765f3-89eb-4077-8829-03e86eb0c90c","Type":"ContainerStarted","Data":"aeda778ca6de188bfb9f09408c5d355e6f8d4366d5f9ebe7bfd9f2e4dea2a0e4"} Jan 29 16:16:58 crc kubenswrapper[4714]: I0129 16:16:58.708907 4714 scope.go:117] "RemoveContainer" containerID="27cd83775817b7c8fd45f33899dd9a718067500e7a4853c38451161035fd33e5" Jan 29 16:19:27 crc kubenswrapper[4714]: I0129 16:19:27.844626 4714 patch_prober.go:28] interesting pod/machine-config-daemon-ppngk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:19:27 crc kubenswrapper[4714]: I0129 16:19:27.845174 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:19:57 crc kubenswrapper[4714]: I0129 16:19:57.844058 4714 patch_prober.go:28] interesting pod/machine-config-daemon-ppngk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:19:57 crc kubenswrapper[4714]: I0129 16:19:57.844882 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:20:27 crc kubenswrapper[4714]: I0129 16:20:27.844453 4714 patch_prober.go:28] interesting pod/machine-config-daemon-ppngk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:20:27 crc kubenswrapper[4714]: I0129 16:20:27.844904 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:20:27 crc kubenswrapper[4714]: I0129 16:20:27.844978 4714 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" Jan 29 16:20:27 crc kubenswrapper[4714]: I0129 16:20:27.845585 4714 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aeda778ca6de188bfb9f09408c5d355e6f8d4366d5f9ebe7bfd9f2e4dea2a0e4"} pod="openshift-machine-config-operator/machine-config-daemon-ppngk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:20:27 crc kubenswrapper[4714]: I0129 16:20:27.845649 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" containerID="cri-o://aeda778ca6de188bfb9f09408c5d355e6f8d4366d5f9ebe7bfd9f2e4dea2a0e4" gracePeriod=600 Jan 29 16:20:28 crc kubenswrapper[4714]: I0129 16:20:28.341451 4714 generic.go:334] "Generic (PLEG): container finished" podID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerID="aeda778ca6de188bfb9f09408c5d355e6f8d4366d5f9ebe7bfd9f2e4dea2a0e4" exitCode=0 Jan 29 16:20:28 crc kubenswrapper[4714]: I0129 16:20:28.341535 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" event={"ID":"c8c765f3-89eb-4077-8829-03e86eb0c90c","Type":"ContainerDied","Data":"aeda778ca6de188bfb9f09408c5d355e6f8d4366d5f9ebe7bfd9f2e4dea2a0e4"} Jan 29 16:20:28 crc kubenswrapper[4714]: I0129 16:20:28.342061 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" event={"ID":"c8c765f3-89eb-4077-8829-03e86eb0c90c","Type":"ContainerStarted","Data":"434181b332ad91829c9ca3b07c475cac7d3c8b013492e90ce07fd88776d24efa"} Jan 29 16:20:28 crc kubenswrapper[4714]: I0129 16:20:28.342098 4714 scope.go:117] "RemoveContainer" containerID="32d59d2a4eb095db60ac3365411265035b47b1f01d164e950c86daa5aecb2792" Jan 29 16:21:48 crc kubenswrapper[4714]: I0129 16:21:48.800121 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sbnkt"] Jan 29 16:21:48 crc kubenswrapper[4714]: I0129 16:21:48.801221 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="ovn-controller" containerID="cri-o://7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016" gracePeriod=30 Jan 29 16:21:48 crc kubenswrapper[4714]: I0129 16:21:48.801612 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="sbdb" containerID="cri-o://662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf" gracePeriod=30 Jan 29 16:21:48 crc kubenswrapper[4714]: I0129 16:21:48.801657 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="nbdb" containerID="cri-o://5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7" gracePeriod=30 Jan 29 16:21:48 crc kubenswrapper[4714]: I0129 16:21:48.801695 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="northd" containerID="cri-o://e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4" gracePeriod=30 Jan 29 16:21:48 crc kubenswrapper[4714]: I0129 16:21:48.801750 4714 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816" gracePeriod=30 Jan 29 16:21:48 crc kubenswrapper[4714]: I0129 16:21:48.801828 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="kube-rbac-proxy-node" containerID="cri-o://b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc" gracePeriod=30 Jan 29 16:21:48 crc kubenswrapper[4714]: I0129 16:21:48.801870 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="ovn-acl-logging" containerID="cri-o://2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd" gracePeriod=30 Jan 29 16:21:48 crc kubenswrapper[4714]: I0129 16:21:48.836364 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="ovnkube-controller" containerID="cri-o://00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6" gracePeriod=30 Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.722074 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sbnkt_04b20f02-6c1e-4082-8233-8f06bda63195/ovnkube-controller/3.log" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.725477 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sbnkt_04b20f02-6c1e-4082-8233-8f06bda63195/ovn-acl-logging/0.log" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.726181 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sbnkt_04b20f02-6c1e-4082-8233-8f06bda63195/ovn-controller/0.log" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.726919 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.795741 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-skzvq"] Jan 29 16:21:49 crc kubenswrapper[4714]: E0129 16:21:49.796003 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796021 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 16:21:49 crc kubenswrapper[4714]: E0129 16:21:49.796032 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48be8ad8-4c02-4bea-a143-449763b39d54" containerName="registry" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796041 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="48be8ad8-4c02-4bea-a143-449763b39d54" containerName="registry" Jan 29 16:21:49 crc kubenswrapper[4714]: E0129 16:21:49.796051 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="kubecfg-setup" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796058 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="kubecfg-setup" Jan 29 16:21:49 crc kubenswrapper[4714]: E0129 16:21:49.796066 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="sbdb" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796074 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="sbdb" Jan 29 16:21:49 crc kubenswrapper[4714]: E0129 16:21:49.796084 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="northd" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796091 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="northd" Jan 29 16:21:49 crc kubenswrapper[4714]: E0129 16:21:49.796103 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="ovnkube-controller" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796110 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="ovnkube-controller" Jan 29 16:21:49 crc kubenswrapper[4714]: E0129 16:21:49.796119 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="nbdb" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796128 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="nbdb" Jan 29 16:21:49 crc kubenswrapper[4714]: E0129 16:21:49.796136 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="ovnkube-controller" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796143 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="ovnkube-controller" Jan 29 16:21:49 crc kubenswrapper[4714]: E0129 16:21:49.796157 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" 
containerName="kube-rbac-proxy-node" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796165 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="kube-rbac-proxy-node" Jan 29 16:21:49 crc kubenswrapper[4714]: E0129 16:21:49.796174 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="ovnkube-controller" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796182 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="ovnkube-controller" Jan 29 16:21:49 crc kubenswrapper[4714]: E0129 16:21:49.796192 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="ovnkube-controller" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796198 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="ovnkube-controller" Jan 29 16:21:49 crc kubenswrapper[4714]: E0129 16:21:49.796206 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="ovn-acl-logging" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796212 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="ovn-acl-logging" Jan 29 16:21:49 crc kubenswrapper[4714]: E0129 16:21:49.796222 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="ovn-controller" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796228 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="ovn-controller" Jan 29 16:21:49 crc kubenswrapper[4714]: E0129 16:21:49.796236 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="ovnkube-controller" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796241 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="ovnkube-controller" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796320 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="ovnkube-controller" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796329 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796337 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="ovnkube-controller" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796346 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="48be8ad8-4c02-4bea-a143-449763b39d54" containerName="registry" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796353 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="nbdb" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796360 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="kube-rbac-proxy-node" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796367 4714 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="ovn-controller" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796375 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="sbdb" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796381 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="ovn-acl-logging" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796389 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="northd" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796396 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="ovnkube-controller" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796543 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="ovnkube-controller" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.796553 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" containerName="ovnkube-controller" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.798003 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886490 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-run-ovn\") pod \"04b20f02-6c1e-4082-8233-8f06bda63195\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886546 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-var-lib-cni-networks-ovn-kubernetes\") pod \"04b20f02-6c1e-4082-8233-8f06bda63195\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886558 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "04b20f02-6c1e-4082-8233-8f06bda63195" (UID: "04b20f02-6c1e-4082-8233-8f06bda63195"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886579 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-cni-netd\") pod \"04b20f02-6c1e-4082-8233-8f06bda63195\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886609 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "04b20f02-6c1e-4082-8233-8f06bda63195" (UID: "04b20f02-6c1e-4082-8233-8f06bda63195"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886610 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-var-lib-openvswitch\") pod \"04b20f02-6c1e-4082-8233-8f06bda63195\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886660 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-slash\") pod \"04b20f02-6c1e-4082-8233-8f06bda63195\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886665 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "04b20f02-6c1e-4082-8233-8f06bda63195" (UID: "04b20f02-6c1e-4082-8233-8f06bda63195"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886688 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-run-netns\") pod \"04b20f02-6c1e-4082-8233-8f06bda63195\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886714 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-run-ovn-kubernetes\") pod \"04b20f02-6c1e-4082-8233-8f06bda63195\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886767 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vrsm\" (UniqueName: \"kubernetes.io/projected/04b20f02-6c1e-4082-8233-8f06bda63195-kube-api-access-7vrsm\") pod \"04b20f02-6c1e-4082-8233-8f06bda63195\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886717 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-slash" (OuterVolumeSpecName: "host-slash") pod "04b20f02-6c1e-4082-8233-8f06bda63195" (UID: "04b20f02-6c1e-4082-8233-8f06bda63195"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886792 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/04b20f02-6c1e-4082-8233-8f06bda63195-ovn-node-metrics-cert\") pod \"04b20f02-6c1e-4082-8233-8f06bda63195\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886706 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "04b20f02-6c1e-4082-8233-8f06bda63195" (UID: "04b20f02-6c1e-4082-8233-8f06bda63195"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886814 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-systemd-units\") pod \"04b20f02-6c1e-4082-8233-8f06bda63195\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886836 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-kubelet\") pod \"04b20f02-6c1e-4082-8233-8f06bda63195\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886856 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-node-log\") pod \"04b20f02-6c1e-4082-8233-8f06bda63195\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886874 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/04b20f02-6c1e-4082-8233-8f06bda63195-ovnkube-config\") pod \"04b20f02-6c1e-4082-8233-8f06bda63195\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886889 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-run-openvswitch\") pod \"04b20f02-6c1e-4082-8233-8f06bda63195\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886761 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "04b20f02-6c1e-4082-8233-8f06bda63195" (UID: "04b20f02-6c1e-4082-8233-8f06bda63195"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886770 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "04b20f02-6c1e-4082-8233-8f06bda63195" (UID: "04b20f02-6c1e-4082-8233-8f06bda63195"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886853 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "04b20f02-6c1e-4082-8233-8f06bda63195" (UID: "04b20f02-6c1e-4082-8233-8f06bda63195"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886890 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "04b20f02-6c1e-4082-8233-8f06bda63195" (UID: "04b20f02-6c1e-4082-8233-8f06bda63195"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886957 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-log-socket" (OuterVolumeSpecName: "log-socket") pod "04b20f02-6c1e-4082-8233-8f06bda63195" (UID: "04b20f02-6c1e-4082-8233-8f06bda63195"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886908 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-log-socket\") pod \"04b20f02-6c1e-4082-8233-8f06bda63195\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.886981 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-node-log" (OuterVolumeSpecName: "node-log") pod "04b20f02-6c1e-4082-8233-8f06bda63195" (UID: "04b20f02-6c1e-4082-8233-8f06bda63195"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887020 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/04b20f02-6c1e-4082-8233-8f06bda63195-ovnkube-script-lib\") pod \"04b20f02-6c1e-4082-8233-8f06bda63195\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887060 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/04b20f02-6c1e-4082-8233-8f06bda63195-env-overrides\") pod \"04b20f02-6c1e-4082-8233-8f06bda63195\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887091 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-cni-bin\") pod \"04b20f02-6c1e-4082-8233-8f06bda63195\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887113 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-run-systemd\") pod \"04b20f02-6c1e-4082-8233-8f06bda63195\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887131 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-etc-openvswitch\") pod \"04b20f02-6c1e-4082-8233-8f06bda63195\" (UID: \"04b20f02-6c1e-4082-8233-8f06bda63195\") " Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887313 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a4febc2a-5af5-4acd-9521-527d275d2814-ovnkube-config\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887341 4714 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/04b20f02-6c1e-4082-8233-8f06bda63195-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "04b20f02-6c1e-4082-8233-8f06bda63195" (UID: "04b20f02-6c1e-4082-8233-8f06bda63195"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887353 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a4febc2a-5af5-4acd-9521-527d275d2814-ovn-node-metrics-cert\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887373 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "04b20f02-6c1e-4082-8233-8f06bda63195" (UID: "04b20f02-6c1e-4082-8233-8f06bda63195"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887377 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-host-run-ovn-kubernetes\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887399 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-var-lib-openvswitch\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887420 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-host-cni-netd\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887439 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-systemd-units\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887459 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a4febc2a-5af5-4acd-9521-527d275d2814-env-overrides\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887481 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-log-socket\") pod \"ovnkube-node-skzvq\" (UID: 
\"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887502 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a4febc2a-5af5-4acd-9521-527d275d2814-ovnkube-script-lib\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887536 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-host-kubelet\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887571 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-run-ovn\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887614 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwd2j\" (UniqueName: \"kubernetes.io/projected/a4febc2a-5af5-4acd-9521-527d275d2814-kube-api-access-wwd2j\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887642 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-run-systemd\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887662 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-node-log\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887683 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-host-run-netns\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887680 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "04b20f02-6c1e-4082-8233-8f06bda63195" (UID: "04b20f02-6c1e-4082-8233-8f06bda63195"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887720 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-run-openvswitch\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887770 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-host-cni-bin\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887805 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887836 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-host-slash\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887854 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-etc-openvswitch\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887968 4714 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.887987 4714 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.888003 4714 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/04b20f02-6c1e-4082-8233-8f06bda63195-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.888007 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "04b20f02-6c1e-4082-8233-8f06bda63195" (UID: "04b20f02-6c1e-4082-8233-8f06bda63195"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.888020 4714 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-node-log\") on node \"crc\" DevicePath \"\"" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.888099 4714 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.888131 4714 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-log-socket\") on node \"crc\" DevicePath \"\"" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.888149 4714 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.888245 4714 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.888265 4714 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.888281 4714 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.888294 4714 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.888306 4714 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-slash\") on node \"crc\" DevicePath \"\"" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.888318 4714 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.888330 4714 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.888334 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04b20f02-6c1e-4082-8233-8f06bda63195-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "04b20f02-6c1e-4082-8233-8f06bda63195" (UID: "04b20f02-6c1e-4082-8233-8f06bda63195"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.888409 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04b20f02-6c1e-4082-8233-8f06bda63195-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "04b20f02-6c1e-4082-8233-8f06bda63195" (UID: "04b20f02-6c1e-4082-8233-8f06bda63195"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.892354 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b20f02-6c1e-4082-8233-8f06bda63195-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "04b20f02-6c1e-4082-8233-8f06bda63195" (UID: "04b20f02-6c1e-4082-8233-8f06bda63195"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.892540 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04b20f02-6c1e-4082-8233-8f06bda63195-kube-api-access-7vrsm" (OuterVolumeSpecName: "kube-api-access-7vrsm") pod "04b20f02-6c1e-4082-8233-8f06bda63195" (UID: "04b20f02-6c1e-4082-8233-8f06bda63195"). InnerVolumeSpecName "kube-api-access-7vrsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.902983 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b2ttm_89560008-8bdc-4640-af11-681d825e69d4/kube-multus/2.log" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.903468 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b2ttm_89560008-8bdc-4640-af11-681d825e69d4/kube-multus/1.log" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.903508 4714 generic.go:334] "Generic (PLEG): container finished" podID="89560008-8bdc-4640-af11-681d825e69d4" containerID="e21aab3b653d9b1f38d58e9c32cbfb8988660ecb96eec4099a6536e09747d8fb" exitCode=2 Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.903557 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b2ttm" event={"ID":"89560008-8bdc-4640-af11-681d825e69d4","Type":"ContainerDied","Data":"e21aab3b653d9b1f38d58e9c32cbfb8988660ecb96eec4099a6536e09747d8fb"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.903591 4714 scope.go:117] "RemoveContainer" containerID="c747ed61a18e27d63630395860ce896242426b1ce46ea5f9d00534b808804a58" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.904059 4714 scope.go:117] "RemoveContainer" containerID="e21aab3b653d9b1f38d58e9c32cbfb8988660ecb96eec4099a6536e09747d8fb" Jan 29 16:21:49 crc kubenswrapper[4714]: E0129 16:21:49.904346 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-b2ttm_openshift-multus(89560008-8bdc-4640-af11-681d825e69d4)\"" pod="openshift-multus/multus-b2ttm" podUID="89560008-8bdc-4640-af11-681d825e69d4" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.905830 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sbnkt_04b20f02-6c1e-4082-8233-8f06bda63195/ovnkube-controller/3.log" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.907365 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "04b20f02-6c1e-4082-8233-8f06bda63195" (UID: "04b20f02-6c1e-4082-8233-8f06bda63195"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.909293 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sbnkt_04b20f02-6c1e-4082-8233-8f06bda63195/ovn-acl-logging/0.log" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.909734 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sbnkt_04b20f02-6c1e-4082-8233-8f06bda63195/ovn-controller/0.log" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910104 4714 generic.go:334] "Generic (PLEG): container finished" podID="04b20f02-6c1e-4082-8233-8f06bda63195" containerID="00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6" exitCode=0 Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910176 4714 generic.go:334] "Generic (PLEG): container finished" podID="04b20f02-6c1e-4082-8233-8f06bda63195" containerID="662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf" exitCode=0 Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910175 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerDied","Data":"00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910195 4714 generic.go:334] "Generic (PLEG): container finished" podID="04b20f02-6c1e-4082-8233-8f06bda63195" containerID="5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7" exitCode=0 Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910208 4714 generic.go:334] "Generic (PLEG): container finished" podID="04b20f02-6c1e-4082-8233-8f06bda63195" containerID="e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4" exitCode=0 Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910217 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerDied","Data":"662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910235 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerDied","Data":"5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910247 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerDied","Data":"e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910258 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerDied","Data":"429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910219 4714 generic.go:334] "Generic (PLEG): container finished" podID="04b20f02-6c1e-4082-8233-8f06bda63195" 
containerID="429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816" exitCode=0 Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910271 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910277 4714 generic.go:334] "Generic (PLEG): container finished" podID="04b20f02-6c1e-4082-8233-8f06bda63195" containerID="b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc" exitCode=0 Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910366 4714 generic.go:334] "Generic (PLEG): container finished" podID="04b20f02-6c1e-4082-8233-8f06bda63195" containerID="2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd" exitCode=143 Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910375 4714 generic.go:334] "Generic (PLEG): container finished" podID="04b20f02-6c1e-4082-8233-8f06bda63195" containerID="7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016" exitCode=143 Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910286 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerDied","Data":"b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910408 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910422 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910431 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910438 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910446 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910453 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910460 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910467 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910474 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910481 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910499 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerDied","Data":"2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910513 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910522 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910529 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910536 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910543 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910551 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910558 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910566 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910573 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910580 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910590 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerDied","Data":"7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910601 4714 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910609 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910616 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910623 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910630 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910637 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910644 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910651 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910659 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910665 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910677 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sbnkt" event={"ID":"04b20f02-6c1e-4082-8233-8f06bda63195","Type":"ContainerDied","Data":"17aad70fcdfcfc2aa07f37d1c4b0d894a800d6ca4c4b34e6100a73fad699fe31"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910695 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910714 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910725 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910733 4714 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910740 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910747 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910754 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910761 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910768 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.910775 4714 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba"} Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.948622 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sbnkt"] Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.951672 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sbnkt"] Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.989629 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-host-kubelet\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.989692 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-run-ovn\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.989757 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwd2j\" (UniqueName: \"kubernetes.io/projected/a4febc2a-5af5-4acd-9521-527d275d2814-kube-api-access-wwd2j\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.989799 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-node-log\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc 
kubenswrapper[4714]: I0129 16:21:49.989793 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-host-kubelet\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.989819 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-run-systemd\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.989828 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-run-ovn\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.989848 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-host-run-netns\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.989900 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-host-run-netns\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.989906 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-run-systemd\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.989973 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-node-log\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990008 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-run-openvswitch\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990037 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-host-cni-bin\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990059 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990063 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-run-openvswitch\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990098 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-host-slash\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990113 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-host-cni-bin\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990120 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-etc-openvswitch\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990140 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-host-slash\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990097 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990153 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-etc-openvswitch\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990184 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a4febc2a-5af5-4acd-9521-527d275d2814-ovnkube-config\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990265 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/a4febc2a-5af5-4acd-9521-527d275d2814-ovn-node-metrics-cert\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990302 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-host-run-ovn-kubernetes\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990327 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-var-lib-openvswitch\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990351 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-systemd-units\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990370 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-host-cni-netd\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990388 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a4febc2a-5af5-4acd-9521-527d275d2814-env-overrides\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990398 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-host-run-ovn-kubernetes\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990403 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-log-socket\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990442 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-var-lib-openvswitch\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990464 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-systemd-units\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990475 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a4febc2a-5af5-4acd-9521-527d275d2814-ovnkube-script-lib\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990490 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-host-cni-netd\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990427 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a4febc2a-5af5-4acd-9521-527d275d2814-log-socket\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990837 4714 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/04b20f02-6c1e-4082-8233-8f06bda63195-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990846 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a4febc2a-5af5-4acd-9521-527d275d2814-ovnkube-config\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990862 4714 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/04b20f02-6c1e-4082-8233-8f06bda63195-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990875 4714 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990887 4714 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/04b20f02-6c1e-4082-8233-8f06bda63195-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990902 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vrsm\" (UniqueName: \"kubernetes.io/projected/04b20f02-6c1e-4082-8233-8f06bda63195-kube-api-access-7vrsm\") on node \"crc\" DevicePath \"\"" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.990915 4714 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/04b20f02-6c1e-4082-8233-8f06bda63195-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.991251 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/a4febc2a-5af5-4acd-9521-527d275d2814-env-overrides\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.991270 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a4febc2a-5af5-4acd-9521-527d275d2814-ovnkube-script-lib\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:49 crc kubenswrapper[4714]: I0129 16:21:49.994164 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a4febc2a-5af5-4acd-9521-527d275d2814-ovn-node-metrics-cert\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.010539 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwd2j\" (UniqueName: \"kubernetes.io/projected/a4febc2a-5af5-4acd-9521-527d275d2814-kube-api-access-wwd2j\") pod \"ovnkube-node-skzvq\" (UID: \"a4febc2a-5af5-4acd-9521-527d275d2814\") " pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.119102 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.199033 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04b20f02-6c1e-4082-8233-8f06bda63195" path="/var/lib/kubelet/pods/04b20f02-6c1e-4082-8233-8f06bda63195/volumes" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.211145 4714 scope.go:117] "RemoveContainer" containerID="00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.233391 4714 scope.go:117] "RemoveContainer" containerID="95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.249617 4714 scope.go:117] "RemoveContainer" containerID="662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.260608 4714 scope.go:117] "RemoveContainer" containerID="5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.274431 4714 scope.go:117] "RemoveContainer" containerID="e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.285249 4714 scope.go:117] "RemoveContainer" containerID="429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.302421 4714 scope.go:117] "RemoveContainer" containerID="b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.316396 4714 scope.go:117] "RemoveContainer" containerID="2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.334897 4714 scope.go:117] "RemoveContainer" containerID="7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.350519 4714 scope.go:117] "RemoveContainer" 
containerID="6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.422089 4714 scope.go:117] "RemoveContainer" containerID="00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6" Jan 29 16:21:50 crc kubenswrapper[4714]: E0129 16:21:50.422559 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6\": container with ID starting with 00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6 not found: ID does not exist" containerID="00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.422589 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6"} err="failed to get container status \"00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6\": rpc error: code = NotFound desc = could not find container \"00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6\": container with ID starting with 00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6 not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.422609 4714 scope.go:117] "RemoveContainer" containerID="95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365" Jan 29 16:21:50 crc kubenswrapper[4714]: E0129 16:21:50.423121 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365\": container with ID starting with 95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365 not found: ID does not exist" containerID="95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.423190 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365"} err="failed to get container status \"95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365\": rpc error: code = NotFound desc = could not find container \"95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365\": container with ID starting with 95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365 not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.423254 4714 scope.go:117] "RemoveContainer" containerID="662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf" Jan 29 16:21:50 crc kubenswrapper[4714]: E0129 16:21:50.423653 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\": container with ID starting with 662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf not found: ID does not exist" containerID="662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.423690 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf"} err="failed to get container status \"662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\": rpc error: code = 
NotFound desc = could not find container \"662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\": container with ID starting with 662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.423705 4714 scope.go:117] "RemoveContainer" containerID="5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7" Jan 29 16:21:50 crc kubenswrapper[4714]: E0129 16:21:50.424005 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\": container with ID starting with 5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7 not found: ID does not exist" containerID="5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.424039 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7"} err="failed to get container status \"5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\": rpc error: code = NotFound desc = could not find container \"5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\": container with ID starting with 5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7 not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.424101 4714 scope.go:117] "RemoveContainer" containerID="e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4" Jan 29 16:21:50 crc kubenswrapper[4714]: E0129 16:21:50.424353 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\": container with ID starting with e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4 not found: ID does not exist" containerID="e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.424379 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4"} err="failed to get container status \"e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\": rpc error: code = NotFound desc = could not find container \"e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\": container with ID starting with e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4 not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.424395 4714 scope.go:117] "RemoveContainer" containerID="429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816" Jan 29 16:21:50 crc kubenswrapper[4714]: E0129 16:21:50.424737 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\": container with ID starting with 429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816 not found: ID does not exist" containerID="429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.424767 4714 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816"} err="failed to get container status \"429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\": rpc error: code = NotFound desc = could not find container \"429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\": container with ID starting with 429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816 not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.424785 4714 scope.go:117] "RemoveContainer" containerID="b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc" Jan 29 16:21:50 crc kubenswrapper[4714]: E0129 16:21:50.425055 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\": container with ID starting with b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc not found: ID does not exist" containerID="b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.425080 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc"} err="failed to get container status \"b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\": rpc error: code = NotFound desc = could not find container \"b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\": container with ID starting with b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.425093 4714 scope.go:117] "RemoveContainer" containerID="2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd" Jan 29 16:21:50 crc kubenswrapper[4714]: E0129 16:21:50.425346 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\": container with ID starting with 2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd not found: ID does not exist" containerID="2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.425379 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd"} err="failed to get container status \"2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\": rpc error: code = NotFound desc = could not find container \"2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\": container with ID starting with 2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.425432 4714 scope.go:117] "RemoveContainer" containerID="7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016" Jan 29 16:21:50 crc kubenswrapper[4714]: E0129 16:21:50.425652 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\": container with ID starting with 7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016 not found: ID does not exist" 
containerID="7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.425674 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016"} err="failed to get container status \"7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\": rpc error: code = NotFound desc = could not find container \"7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\": container with ID starting with 7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016 not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.425687 4714 scope.go:117] "RemoveContainer" containerID="6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba" Jan 29 16:21:50 crc kubenswrapper[4714]: E0129 16:21:50.425893 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\": container with ID starting with 6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba not found: ID does not exist" containerID="6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.425914 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba"} err="failed to get container status \"6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\": rpc error: code = NotFound desc = could not find container \"6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\": container with ID starting with 6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.425926 4714 scope.go:117] "RemoveContainer" containerID="00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.426158 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6"} err="failed to get container status \"00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6\": rpc error: code = NotFound desc = could not find container \"00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6\": container with ID starting with 00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6 not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.426183 4714 scope.go:117] "RemoveContainer" containerID="95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.426415 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365"} err="failed to get container status \"95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365\": rpc error: code = NotFound desc = could not find container \"95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365\": container with ID starting with 95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365 not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.426436 4714 scope.go:117] "RemoveContainer" 
containerID="662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.426611 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf"} err="failed to get container status \"662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\": rpc error: code = NotFound desc = could not find container \"662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\": container with ID starting with 662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.426629 4714 scope.go:117] "RemoveContainer" containerID="5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.426791 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7"} err="failed to get container status \"5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\": rpc error: code = NotFound desc = could not find container \"5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\": container with ID starting with 5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7 not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.426806 4714 scope.go:117] "RemoveContainer" containerID="e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.426975 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4"} err="failed to get container status \"e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\": rpc error: code = NotFound desc = could not find container \"e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\": container with ID starting with e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4 not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.426992 4714 scope.go:117] "RemoveContainer" containerID="429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.427171 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816"} err="failed to get container status \"429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\": rpc error: code = NotFound desc = could not find container \"429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\": container with ID starting with 429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816 not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.427187 4714 scope.go:117] "RemoveContainer" containerID="b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.428039 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc"} err="failed to get container status \"b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\": rpc error: code = NotFound desc = could not find 
container \"b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\": container with ID starting with b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.428059 4714 scope.go:117] "RemoveContainer" containerID="2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.428277 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd"} err="failed to get container status \"2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\": rpc error: code = NotFound desc = could not find container \"2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\": container with ID starting with 2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.428296 4714 scope.go:117] "RemoveContainer" containerID="7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.428456 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016"} err="failed to get container status \"7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\": rpc error: code = NotFound desc = could not find container \"7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\": container with ID starting with 7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016 not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.428474 4714 scope.go:117] "RemoveContainer" containerID="6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.428640 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba"} err="failed to get container status \"6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\": rpc error: code = NotFound desc = could not find container \"6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\": container with ID starting with 6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.428657 4714 scope.go:117] "RemoveContainer" containerID="00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.428810 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6"} err="failed to get container status \"00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6\": rpc error: code = NotFound desc = could not find container \"00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6\": container with ID starting with 00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6 not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.428826 4714 scope.go:117] "RemoveContainer" containerID="95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.429020 4714 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365"} err="failed to get container status \"95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365\": rpc error: code = NotFound desc = could not find container \"95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365\": container with ID starting with 95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365 not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.429039 4714 scope.go:117] "RemoveContainer" containerID="662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.429373 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf"} err="failed to get container status \"662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\": rpc error: code = NotFound desc = could not find container \"662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\": container with ID starting with 662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.429390 4714 scope.go:117] "RemoveContainer" containerID="5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.429583 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7"} err="failed to get container status \"5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\": rpc error: code = NotFound desc = could not find container \"5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\": container with ID starting with 5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7 not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.429599 4714 scope.go:117] "RemoveContainer" containerID="e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.429829 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4"} err="failed to get container status \"e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\": rpc error: code = NotFound desc = could not find container \"e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\": container with ID starting with e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4 not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.429846 4714 scope.go:117] "RemoveContainer" containerID="429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.430029 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816"} err="failed to get container status \"429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\": rpc error: code = NotFound desc = could not find container \"429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\": container with ID starting with 
429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816 not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.430046 4714 scope.go:117] "RemoveContainer" containerID="b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.430342 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc"} err="failed to get container status \"b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\": rpc error: code = NotFound desc = could not find container \"b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\": container with ID starting with b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.430368 4714 scope.go:117] "RemoveContainer" containerID="2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.431066 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd"} err="failed to get container status \"2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\": rpc error: code = NotFound desc = could not find container \"2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\": container with ID starting with 2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.431100 4714 scope.go:117] "RemoveContainer" containerID="7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.431483 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016"} err="failed to get container status \"7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\": rpc error: code = NotFound desc = could not find container \"7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\": container with ID starting with 7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016 not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.431535 4714 scope.go:117] "RemoveContainer" containerID="6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.431950 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba"} err="failed to get container status \"6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\": rpc error: code = NotFound desc = could not find container \"6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\": container with ID starting with 6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.432003 4714 scope.go:117] "RemoveContainer" containerID="00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.432382 4714 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6"} err="failed to get container status \"00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6\": rpc error: code = NotFound desc = could not find container \"00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6\": container with ID starting with 00a2229f59557dd718e8e44cd8806fa686a96164db467c9fd584df1ca5f949c6 not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.432408 4714 scope.go:117] "RemoveContainer" containerID="95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.432727 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365"} err="failed to get container status \"95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365\": rpc error: code = NotFound desc = could not find container \"95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365\": container with ID starting with 95506ff95d5b470923bf3c4615b45a7b4741260bb32a99839052875ecf50a365 not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.432760 4714 scope.go:117] "RemoveContainer" containerID="662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.433086 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf"} err="failed to get container status \"662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\": rpc error: code = NotFound desc = could not find container \"662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf\": container with ID starting with 662852ce553d1fb58e5b7f129508aba9fa239b05fb7a0102edfe307908a49bdf not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.433112 4714 scope.go:117] "RemoveContainer" containerID="5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.433374 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7"} err="failed to get container status \"5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\": rpc error: code = NotFound desc = could not find container \"5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7\": container with ID starting with 5187dc8548f605295fbf37ce78f2d5a40a3ba0996c4d631fe6df1583db419df7 not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.433411 4714 scope.go:117] "RemoveContainer" containerID="e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.434149 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4"} err="failed to get container status \"e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\": rpc error: code = NotFound desc = could not find container \"e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4\": container with ID starting with e625f30f0174bd89b2624c71963ed72e74cb0c7ad78f18998ec81ea57690fde4 not found: ID does not exist" Jan 
29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.434178 4714 scope.go:117] "RemoveContainer" containerID="429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.434434 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816"} err="failed to get container status \"429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\": rpc error: code = NotFound desc = could not find container \"429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816\": container with ID starting with 429e69f5ff2994413c83898a548cfdc9d11bd6a498838e0bc2fe8813f2800816 not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.434462 4714 scope.go:117] "RemoveContainer" containerID="b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.434703 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc"} err="failed to get container status \"b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\": rpc error: code = NotFound desc = could not find container \"b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc\": container with ID starting with b1d444ab76cebea1c9d94e3caae9a82ce1a3f9b5b98eed31548472a8f8f6c4bc not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.434749 4714 scope.go:117] "RemoveContainer" containerID="2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.435052 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd"} err="failed to get container status \"2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\": rpc error: code = NotFound desc = could not find container \"2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd\": container with ID starting with 2a5f6317ab204866e3d4a05f50396b0b2786b0e1d922d255c3a2b53c7e6968fd not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.435080 4714 scope.go:117] "RemoveContainer" containerID="7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.435324 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016"} err="failed to get container status \"7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\": rpc error: code = NotFound desc = could not find container \"7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016\": container with ID starting with 7d00f0f27a2d4d8a5e65608561bbad3e2d2017afa341e78c2914e2061f983016 not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.435348 4714 scope.go:117] "RemoveContainer" containerID="6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.435582 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba"} err="failed to get container status 
\"6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\": rpc error: code = NotFound desc = could not find container \"6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba\": container with ID starting with 6025ce34465bd94b02e72f82b958993a2aa8c0ed695f7b6c70f8dd4475ec67ba not found: ID does not exist" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.916920 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b2ttm_89560008-8bdc-4640-af11-681d825e69d4/kube-multus/2.log" Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.919179 4714 generic.go:334] "Generic (PLEG): container finished" podID="a4febc2a-5af5-4acd-9521-527d275d2814" containerID="041701549e9197314050e0918afcd67cc692ec700b123e479cc8b7d528218f5a" exitCode=0 Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.919281 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" event={"ID":"a4febc2a-5af5-4acd-9521-527d275d2814","Type":"ContainerDied","Data":"041701549e9197314050e0918afcd67cc692ec700b123e479cc8b7d528218f5a"} Jan 29 16:21:50 crc kubenswrapper[4714]: I0129 16:21:50.919334 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" event={"ID":"a4febc2a-5af5-4acd-9521-527d275d2814","Type":"ContainerStarted","Data":"a4e87bf80e9415e77e846fb0756debf13b4e453e2713dec1043b3d31093ccf48"} Jan 29 16:21:51 crc kubenswrapper[4714]: I0129 16:21:51.932250 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" event={"ID":"a4febc2a-5af5-4acd-9521-527d275d2814","Type":"ContainerStarted","Data":"b1478134ba37f7e9fe2f8f1c3a7614a4176990c9d63d024ea6a6366a9ffd5ef4"} Jan 29 16:21:51 crc kubenswrapper[4714]: I0129 16:21:51.932685 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" event={"ID":"a4febc2a-5af5-4acd-9521-527d275d2814","Type":"ContainerStarted","Data":"cf40f584012c0ab702c5121a367f19e69f14be7138e56a223b208622c970c7fe"} Jan 29 16:21:51 crc kubenswrapper[4714]: I0129 16:21:51.932711 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" event={"ID":"a4febc2a-5af5-4acd-9521-527d275d2814","Type":"ContainerStarted","Data":"8f203e1c5feaedd818df2c932055b9ca8e04a4e3d2a22c8101529b4b66c37e5e"} Jan 29 16:21:52 crc kubenswrapper[4714]: I0129 16:21:52.942435 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" event={"ID":"a4febc2a-5af5-4acd-9521-527d275d2814","Type":"ContainerStarted","Data":"5e8ee4e72c5e7be52b0a982526799ee625e6153e19584ad2162494477e76aceb"} Jan 29 16:21:52 crc kubenswrapper[4714]: I0129 16:21:52.942500 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" event={"ID":"a4febc2a-5af5-4acd-9521-527d275d2814","Type":"ContainerStarted","Data":"81152bf9ad2c03871e447d73002e05950a372535404748d0a5554458e8dbbc0a"} Jan 29 16:21:52 crc kubenswrapper[4714]: I0129 16:21:52.942518 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" event={"ID":"a4febc2a-5af5-4acd-9521-527d275d2814","Type":"ContainerStarted","Data":"d3c6c9a6cbe7093656147cf42544feb7f3f26a5e115340d9fb9c9d94206090ae"} Jan 29 16:21:54 crc kubenswrapper[4714]: I0129 16:21:54.960213 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" 
event={"ID":"a4febc2a-5af5-4acd-9521-527d275d2814","Type":"ContainerStarted","Data":"aa8ac0cdf264663cc0ef0807cffc7f26644661fa11d01105a502295dcd648ca6"} Jan 29 16:21:56 crc kubenswrapper[4714]: I0129 16:21:56.974368 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" event={"ID":"a4febc2a-5af5-4acd-9521-527d275d2814","Type":"ContainerStarted","Data":"01a66910506befdef6b42002992c445aa79f1addfe47a86c32f58e266f3df838"} Jan 29 16:21:56 crc kubenswrapper[4714]: I0129 16:21:56.975008 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:56 crc kubenswrapper[4714]: I0129 16:21:56.975025 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:57 crc kubenswrapper[4714]: I0129 16:21:57.005804 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:57 crc kubenswrapper[4714]: I0129 16:21:57.011029 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" podStartSLOduration=8.011013825 podStartE2EDuration="8.011013825s" podCreationTimestamp="2026-01-29 16:21:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:21:57.007482218 +0000 UTC m=+723.527983338" watchObservedRunningTime="2026-01-29 16:21:57.011013825 +0000 UTC m=+723.531514935" Jan 29 16:21:57 crc kubenswrapper[4714]: I0129 16:21:57.981868 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:21:58 crc kubenswrapper[4714]: I0129 16:21:58.014155 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:22:03 crc kubenswrapper[4714]: I0129 16:22:03.184106 4714 scope.go:117] "RemoveContainer" containerID="e21aab3b653d9b1f38d58e9c32cbfb8988660ecb96eec4099a6536e09747d8fb" Jan 29 16:22:03 crc kubenswrapper[4714]: E0129 16:22:03.184889 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-b2ttm_openshift-multus(89560008-8bdc-4640-af11-681d825e69d4)\"" pod="openshift-multus/multus-b2ttm" podUID="89560008-8bdc-4640-af11-681d825e69d4" Jan 29 16:22:12 crc kubenswrapper[4714]: I0129 16:22:12.001551 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st"] Jan 29 16:22:12 crc kubenswrapper[4714]: I0129 16:22:12.004106 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" Jan 29 16:22:12 crc kubenswrapper[4714]: I0129 16:22:12.006827 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 16:22:12 crc kubenswrapper[4714]: I0129 16:22:12.024323 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st"] Jan 29 16:22:12 crc kubenswrapper[4714]: I0129 16:22:12.087789 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c184c6f2-1af5-4f70-9251-6beb2baae06b-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st\" (UID: \"c184c6f2-1af5-4f70-9251-6beb2baae06b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" Jan 29 16:22:12 crc kubenswrapper[4714]: I0129 16:22:12.087873 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjp7x\" (UniqueName: \"kubernetes.io/projected/c184c6f2-1af5-4f70-9251-6beb2baae06b-kube-api-access-zjp7x\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st\" (UID: \"c184c6f2-1af5-4f70-9251-6beb2baae06b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" Jan 29 16:22:12 crc kubenswrapper[4714]: I0129 16:22:12.088044 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c184c6f2-1af5-4f70-9251-6beb2baae06b-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st\" (UID: \"c184c6f2-1af5-4f70-9251-6beb2baae06b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" Jan 29 16:22:12 crc kubenswrapper[4714]: I0129 16:22:12.188874 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c184c6f2-1af5-4f70-9251-6beb2baae06b-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st\" (UID: \"c184c6f2-1af5-4f70-9251-6beb2baae06b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" Jan 29 16:22:12 crc kubenswrapper[4714]: I0129 16:22:12.189111 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c184c6f2-1af5-4f70-9251-6beb2baae06b-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st\" (UID: \"c184c6f2-1af5-4f70-9251-6beb2baae06b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" Jan 29 16:22:12 crc kubenswrapper[4714]: I0129 16:22:12.189193 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjp7x\" (UniqueName: \"kubernetes.io/projected/c184c6f2-1af5-4f70-9251-6beb2baae06b-kube-api-access-zjp7x\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st\" (UID: \"c184c6f2-1af5-4f70-9251-6beb2baae06b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" Jan 29 16:22:12 crc kubenswrapper[4714]: I0129 16:22:12.189859 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/c184c6f2-1af5-4f70-9251-6beb2baae06b-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st\" (UID: \"c184c6f2-1af5-4f70-9251-6beb2baae06b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" Jan 29 16:22:12 crc kubenswrapper[4714]: I0129 16:22:12.189994 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c184c6f2-1af5-4f70-9251-6beb2baae06b-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st\" (UID: \"c184c6f2-1af5-4f70-9251-6beb2baae06b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" Jan 29 16:22:12 crc kubenswrapper[4714]: I0129 16:22:12.224993 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjp7x\" (UniqueName: \"kubernetes.io/projected/c184c6f2-1af5-4f70-9251-6beb2baae06b-kube-api-access-zjp7x\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st\" (UID: \"c184c6f2-1af5-4f70-9251-6beb2baae06b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" Jan 29 16:22:12 crc kubenswrapper[4714]: I0129 16:22:12.338206 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" Jan 29 16:22:12 crc kubenswrapper[4714]: E0129 16:22:12.374710 4714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_openshift-marketplace_c184c6f2-1af5-4f70-9251-6beb2baae06b_0(5956f26abe33ed8221a24b1daa95878a91ce67788da634e2a0ca4f98e920f958): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 16:22:12 crc kubenswrapper[4714]: E0129 16:22:12.374802 4714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_openshift-marketplace_c184c6f2-1af5-4f70-9251-6beb2baae06b_0(5956f26abe33ed8221a24b1daa95878a91ce67788da634e2a0ca4f98e920f958): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" Jan 29 16:22:12 crc kubenswrapper[4714]: E0129 16:22:12.374829 4714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_openshift-marketplace_c184c6f2-1af5-4f70-9251-6beb2baae06b_0(5956f26abe33ed8221a24b1daa95878a91ce67788da634e2a0ca4f98e920f958): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" Jan 29 16:22:12 crc kubenswrapper[4714]: E0129 16:22:12.374893 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_openshift-marketplace(c184c6f2-1af5-4f70-9251-6beb2baae06b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_openshift-marketplace(c184c6f2-1af5-4f70-9251-6beb2baae06b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_openshift-marketplace_c184c6f2-1af5-4f70-9251-6beb2baae06b_0(5956f26abe33ed8221a24b1daa95878a91ce67788da634e2a0ca4f98e920f958): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" podUID="c184c6f2-1af5-4f70-9251-6beb2baae06b" Jan 29 16:22:13 crc kubenswrapper[4714]: I0129 16:22:13.092248 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" Jan 29 16:22:13 crc kubenswrapper[4714]: I0129 16:22:13.092794 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" Jan 29 16:22:13 crc kubenswrapper[4714]: E0129 16:22:13.112739 4714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_openshift-marketplace_c184c6f2-1af5-4f70-9251-6beb2baae06b_0(30e6d6bea70f12f3450627a5bc192b45822df142fc1968ef1a5441b4b22377b0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 16:22:13 crc kubenswrapper[4714]: E0129 16:22:13.112848 4714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_openshift-marketplace_c184c6f2-1af5-4f70-9251-6beb2baae06b_0(30e6d6bea70f12f3450627a5bc192b45822df142fc1968ef1a5441b4b22377b0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" Jan 29 16:22:13 crc kubenswrapper[4714]: E0129 16:22:13.112888 4714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_openshift-marketplace_c184c6f2-1af5-4f70-9251-6beb2baae06b_0(30e6d6bea70f12f3450627a5bc192b45822df142fc1968ef1a5441b4b22377b0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" Jan 29 16:22:13 crc kubenswrapper[4714]: E0129 16:22:13.112974 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_openshift-marketplace(c184c6f2-1af5-4f70-9251-6beb2baae06b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_openshift-marketplace(c184c6f2-1af5-4f70-9251-6beb2baae06b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_openshift-marketplace_c184c6f2-1af5-4f70-9251-6beb2baae06b_0(30e6d6bea70f12f3450627a5bc192b45822df142fc1968ef1a5441b4b22377b0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" podUID="c184c6f2-1af5-4f70-9251-6beb2baae06b" Jan 29 16:22:15 crc kubenswrapper[4714]: I0129 16:22:15.185153 4714 scope.go:117] "RemoveContainer" containerID="e21aab3b653d9b1f38d58e9c32cbfb8988660ecb96eec4099a6536e09747d8fb" Jan 29 16:22:16 crc kubenswrapper[4714]: I0129 16:22:16.136726 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b2ttm_89560008-8bdc-4640-af11-681d825e69d4/kube-multus/2.log" Jan 29 16:22:16 crc kubenswrapper[4714]: I0129 16:22:16.137327 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b2ttm" event={"ID":"89560008-8bdc-4640-af11-681d825e69d4","Type":"ContainerStarted","Data":"4caf97375d8de0e64eae8f7542cedd216d5913d276a7c525626882159c3c130b"} Jan 29 16:22:20 crc kubenswrapper[4714]: I0129 16:22:20.149974 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-skzvq" Jan 29 16:22:23 crc kubenswrapper[4714]: I0129 16:22:23.183611 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" Jan 29 16:22:23 crc kubenswrapper[4714]: I0129 16:22:23.185147 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" Jan 29 16:22:23 crc kubenswrapper[4714]: I0129 16:22:23.389978 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st"] Jan 29 16:22:24 crc kubenswrapper[4714]: I0129 16:22:24.190894 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" event={"ID":"c184c6f2-1af5-4f70-9251-6beb2baae06b","Type":"ContainerStarted","Data":"869accf0303863a05a34ac3c3845988fadd47bff208f4ed12415981d69ec342a"} Jan 29 16:22:26 crc kubenswrapper[4714]: I0129 16:22:26.203923 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" event={"ID":"c184c6f2-1af5-4f70-9251-6beb2baae06b","Type":"ContainerStarted","Data":"05e0a9023a315c8e823d2a29c8e0d2e37a13cd8fbf2c97a5e88c1825840657f8"} Jan 29 16:22:27 crc kubenswrapper[4714]: I0129 16:22:27.211661 4714 generic.go:334] "Generic (PLEG): container finished" podID="c184c6f2-1af5-4f70-9251-6beb2baae06b" containerID="05e0a9023a315c8e823d2a29c8e0d2e37a13cd8fbf2c97a5e88c1825840657f8" exitCode=0 Jan 29 16:22:27 crc kubenswrapper[4714]: I0129 16:22:27.211821 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" event={"ID":"c184c6f2-1af5-4f70-9251-6beb2baae06b","Type":"ContainerDied","Data":"05e0a9023a315c8e823d2a29c8e0d2e37a13cd8fbf2c97a5e88c1825840657f8"} Jan 29 16:22:27 crc kubenswrapper[4714]: I0129 16:22:27.213985 4714 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:22:32 crc kubenswrapper[4714]: I0129 16:22:32.239871 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" event={"ID":"c184c6f2-1af5-4f70-9251-6beb2baae06b","Type":"ContainerStarted","Data":"fb8c01e794f29fc68495ebe7c0293de3d5b3be2b85ddbfd0e228c600da4a1d06"} Jan 29 16:22:32 crc kubenswrapper[4714]: I0129 16:22:32.821700 4714 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 16:22:33 crc kubenswrapper[4714]: I0129 16:22:33.246440 4714 generic.go:334] "Generic (PLEG): container finished" podID="c184c6f2-1af5-4f70-9251-6beb2baae06b" containerID="fb8c01e794f29fc68495ebe7c0293de3d5b3be2b85ddbfd0e228c600da4a1d06" exitCode=0 Jan 29 16:22:33 crc kubenswrapper[4714]: I0129 16:22:33.246496 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" event={"ID":"c184c6f2-1af5-4f70-9251-6beb2baae06b","Type":"ContainerDied","Data":"fb8c01e794f29fc68495ebe7c0293de3d5b3be2b85ddbfd0e228c600da4a1d06"} Jan 29 16:22:33 crc kubenswrapper[4714]: I0129 16:22:33.805243 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9ttgj"] Jan 29 16:22:33 crc kubenswrapper[4714]: I0129 16:22:33.807505 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9ttgj" Jan 29 16:22:33 crc kubenswrapper[4714]: I0129 16:22:33.826549 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9ttgj"] Jan 29 16:22:33 crc kubenswrapper[4714]: I0129 16:22:33.902497 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b93b136-c182-4b20-89a4-1d61e1d2d03c-utilities\") pod \"redhat-operators-9ttgj\" (UID: \"3b93b136-c182-4b20-89a4-1d61e1d2d03c\") " pod="openshift-marketplace/redhat-operators-9ttgj" Jan 29 16:22:33 crc kubenswrapper[4714]: I0129 16:22:33.902651 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkw5r\" (UniqueName: \"kubernetes.io/projected/3b93b136-c182-4b20-89a4-1d61e1d2d03c-kube-api-access-xkw5r\") pod \"redhat-operators-9ttgj\" (UID: \"3b93b136-c182-4b20-89a4-1d61e1d2d03c\") " pod="openshift-marketplace/redhat-operators-9ttgj" Jan 29 16:22:33 crc kubenswrapper[4714]: I0129 16:22:33.902719 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b93b136-c182-4b20-89a4-1d61e1d2d03c-catalog-content\") pod \"redhat-operators-9ttgj\" (UID: \"3b93b136-c182-4b20-89a4-1d61e1d2d03c\") " pod="openshift-marketplace/redhat-operators-9ttgj" Jan 29 16:22:34 crc kubenswrapper[4714]: I0129 16:22:34.004209 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b93b136-c182-4b20-89a4-1d61e1d2d03c-utilities\") pod \"redhat-operators-9ttgj\" (UID: \"3b93b136-c182-4b20-89a4-1d61e1d2d03c\") " pod="openshift-marketplace/redhat-operators-9ttgj" Jan 29 16:22:34 crc kubenswrapper[4714]: I0129 16:22:34.004380 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkw5r\" (UniqueName: \"kubernetes.io/projected/3b93b136-c182-4b20-89a4-1d61e1d2d03c-kube-api-access-xkw5r\") pod \"redhat-operators-9ttgj\" (UID: \"3b93b136-c182-4b20-89a4-1d61e1d2d03c\") " pod="openshift-marketplace/redhat-operators-9ttgj" Jan 29 16:22:34 crc kubenswrapper[4714]: I0129 16:22:34.004443 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b93b136-c182-4b20-89a4-1d61e1d2d03c-catalog-content\") pod \"redhat-operators-9ttgj\" (UID: \"3b93b136-c182-4b20-89a4-1d61e1d2d03c\") " pod="openshift-marketplace/redhat-operators-9ttgj" Jan 29 16:22:34 crc kubenswrapper[4714]: I0129 16:22:34.004900 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b93b136-c182-4b20-89a4-1d61e1d2d03c-utilities\") pod \"redhat-operators-9ttgj\" (UID: \"3b93b136-c182-4b20-89a4-1d61e1d2d03c\") " pod="openshift-marketplace/redhat-operators-9ttgj" Jan 29 16:22:34 crc kubenswrapper[4714]: I0129 16:22:34.005072 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b93b136-c182-4b20-89a4-1d61e1d2d03c-catalog-content\") pod \"redhat-operators-9ttgj\" (UID: \"3b93b136-c182-4b20-89a4-1d61e1d2d03c\") " pod="openshift-marketplace/redhat-operators-9ttgj" Jan 29 16:22:34 crc kubenswrapper[4714]: I0129 16:22:34.021899 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xkw5r\" (UniqueName: \"kubernetes.io/projected/3b93b136-c182-4b20-89a4-1d61e1d2d03c-kube-api-access-xkw5r\") pod \"redhat-operators-9ttgj\" (UID: \"3b93b136-c182-4b20-89a4-1d61e1d2d03c\") " pod="openshift-marketplace/redhat-operators-9ttgj" Jan 29 16:22:34 crc kubenswrapper[4714]: I0129 16:22:34.145221 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9ttgj" Jan 29 16:22:34 crc kubenswrapper[4714]: I0129 16:22:34.254025 4714 generic.go:334] "Generic (PLEG): container finished" podID="c184c6f2-1af5-4f70-9251-6beb2baae06b" containerID="c3c80ba39f2166deea58a208aff618c1c394a583efafcb8a7f94ec8b247ab86e" exitCode=0 Jan 29 16:22:34 crc kubenswrapper[4714]: I0129 16:22:34.254081 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" event={"ID":"c184c6f2-1af5-4f70-9251-6beb2baae06b","Type":"ContainerDied","Data":"c3c80ba39f2166deea58a208aff618c1c394a583efafcb8a7f94ec8b247ab86e"} Jan 29 16:22:34 crc kubenswrapper[4714]: I0129 16:22:34.430840 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9ttgj"] Jan 29 16:22:35 crc kubenswrapper[4714]: I0129 16:22:35.260958 4714 generic.go:334] "Generic (PLEG): container finished" podID="3b93b136-c182-4b20-89a4-1d61e1d2d03c" containerID="dc5891a8c29afb2826ce230ab3fd28a92cd17c413c6d5d58f4c0d944c000e9ef" exitCode=0 Jan 29 16:22:35 crc kubenswrapper[4714]: I0129 16:22:35.261031 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ttgj" event={"ID":"3b93b136-c182-4b20-89a4-1d61e1d2d03c","Type":"ContainerDied","Data":"dc5891a8c29afb2826ce230ab3fd28a92cd17c413c6d5d58f4c0d944c000e9ef"} Jan 29 16:22:35 crc kubenswrapper[4714]: I0129 16:22:35.261319 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ttgj" event={"ID":"3b93b136-c182-4b20-89a4-1d61e1d2d03c","Type":"ContainerStarted","Data":"4716263f07ca6d3ad4b5be1c8fb4857b1c649402f03ef208b20007f3c151a261"} Jan 29 16:22:35 crc kubenswrapper[4714]: I0129 16:22:35.550120 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" Jan 29 16:22:35 crc kubenswrapper[4714]: I0129 16:22:35.724979 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjp7x\" (UniqueName: \"kubernetes.io/projected/c184c6f2-1af5-4f70-9251-6beb2baae06b-kube-api-access-zjp7x\") pod \"c184c6f2-1af5-4f70-9251-6beb2baae06b\" (UID: \"c184c6f2-1af5-4f70-9251-6beb2baae06b\") " Jan 29 16:22:35 crc kubenswrapper[4714]: I0129 16:22:35.725059 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c184c6f2-1af5-4f70-9251-6beb2baae06b-bundle\") pod \"c184c6f2-1af5-4f70-9251-6beb2baae06b\" (UID: \"c184c6f2-1af5-4f70-9251-6beb2baae06b\") " Jan 29 16:22:35 crc kubenswrapper[4714]: I0129 16:22:35.725150 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c184c6f2-1af5-4f70-9251-6beb2baae06b-util\") pod \"c184c6f2-1af5-4f70-9251-6beb2baae06b\" (UID: \"c184c6f2-1af5-4f70-9251-6beb2baae06b\") " Jan 29 16:22:35 crc kubenswrapper[4714]: I0129 16:22:35.726128 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c184c6f2-1af5-4f70-9251-6beb2baae06b-bundle" (OuterVolumeSpecName: "bundle") pod "c184c6f2-1af5-4f70-9251-6beb2baae06b" (UID: "c184c6f2-1af5-4f70-9251-6beb2baae06b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:22:35 crc kubenswrapper[4714]: I0129 16:22:35.734103 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c184c6f2-1af5-4f70-9251-6beb2baae06b-kube-api-access-zjp7x" (OuterVolumeSpecName: "kube-api-access-zjp7x") pod "c184c6f2-1af5-4f70-9251-6beb2baae06b" (UID: "c184c6f2-1af5-4f70-9251-6beb2baae06b"). InnerVolumeSpecName "kube-api-access-zjp7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:35 crc kubenswrapper[4714]: I0129 16:22:35.738948 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c184c6f2-1af5-4f70-9251-6beb2baae06b-util" (OuterVolumeSpecName: "util") pod "c184c6f2-1af5-4f70-9251-6beb2baae06b" (UID: "c184c6f2-1af5-4f70-9251-6beb2baae06b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:22:35 crc kubenswrapper[4714]: I0129 16:22:35.827189 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjp7x\" (UniqueName: \"kubernetes.io/projected/c184c6f2-1af5-4f70-9251-6beb2baae06b-kube-api-access-zjp7x\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:35 crc kubenswrapper[4714]: I0129 16:22:35.827249 4714 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c184c6f2-1af5-4f70-9251-6beb2baae06b-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:35 crc kubenswrapper[4714]: I0129 16:22:35.827274 4714 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c184c6f2-1af5-4f70-9251-6beb2baae06b-util\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:36 crc kubenswrapper[4714]: I0129 16:22:36.279873 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" event={"ID":"c184c6f2-1af5-4f70-9251-6beb2baae06b","Type":"ContainerDied","Data":"869accf0303863a05a34ac3c3845988fadd47bff208f4ed12415981d69ec342a"} Jan 29 16:22:36 crc kubenswrapper[4714]: I0129 16:22:36.279957 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="869accf0303863a05a34ac3c3845988fadd47bff208f4ed12415981d69ec342a" Jan 29 16:22:36 crc kubenswrapper[4714]: I0129 16:22:36.280186 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st" Jan 29 16:22:38 crc kubenswrapper[4714]: I0129 16:22:38.294188 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ttgj" event={"ID":"3b93b136-c182-4b20-89a4-1d61e1d2d03c","Type":"ContainerStarted","Data":"6632448df0e09a8f0e5148074bcd76d6b08015e4e356eeae9a581a934dfb3cac"} Jan 29 16:22:39 crc kubenswrapper[4714]: I0129 16:22:39.302108 4714 generic.go:334] "Generic (PLEG): container finished" podID="3b93b136-c182-4b20-89a4-1d61e1d2d03c" containerID="6632448df0e09a8f0e5148074bcd76d6b08015e4e356eeae9a581a934dfb3cac" exitCode=0 Jan 29 16:22:39 crc kubenswrapper[4714]: I0129 16:22:39.302225 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ttgj" event={"ID":"3b93b136-c182-4b20-89a4-1d61e1d2d03c","Type":"ContainerDied","Data":"6632448df0e09a8f0e5148074bcd76d6b08015e4e356eeae9a581a934dfb3cac"} Jan 29 16:22:42 crc kubenswrapper[4714]: I0129 16:22:42.319363 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ttgj" event={"ID":"3b93b136-c182-4b20-89a4-1d61e1d2d03c","Type":"ContainerStarted","Data":"71891b59c66ba86ccba3be13ccefa3eee4b9a927ebc16ecd2fd3f3077bf4cab2"} Jan 29 16:22:42 crc kubenswrapper[4714]: I0129 16:22:42.340139 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9ttgj" podStartSLOduration=3.49502799 podStartE2EDuration="9.340121388s" podCreationTimestamp="2026-01-29 16:22:33 +0000 UTC" firstStartedPulling="2026-01-29 16:22:35.26318497 +0000 UTC m=+761.783686090" lastFinishedPulling="2026-01-29 16:22:41.108278348 +0000 UTC m=+767.628779488" observedRunningTime="2026-01-29 16:22:42.336464397 +0000 UTC m=+768.856965517" watchObservedRunningTime="2026-01-29 16:22:42.340121388 +0000 UTC m=+768.860622508" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 
16:22:43.543991 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-586b87b897-zpr4q"] Jan 29 16:22:43 crc kubenswrapper[4714]: E0129 16:22:43.544197 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c184c6f2-1af5-4f70-9251-6beb2baae06b" containerName="extract" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.544209 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="c184c6f2-1af5-4f70-9251-6beb2baae06b" containerName="extract" Jan 29 16:22:43 crc kubenswrapper[4714]: E0129 16:22:43.544218 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c184c6f2-1af5-4f70-9251-6beb2baae06b" containerName="util" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.544224 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="c184c6f2-1af5-4f70-9251-6beb2baae06b" containerName="util" Jan 29 16:22:43 crc kubenswrapper[4714]: E0129 16:22:43.544242 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c184c6f2-1af5-4f70-9251-6beb2baae06b" containerName="pull" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.544247 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="c184c6f2-1af5-4f70-9251-6beb2baae06b" containerName="pull" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.544333 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="c184c6f2-1af5-4f70-9251-6beb2baae06b" containerName="extract" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.544712 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-586b87b897-zpr4q" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.547818 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.548632 4714 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-d6tsl" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.548781 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.548880 4714 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.549666 4714 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.568481 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-586b87b897-zpr4q"] Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.720026 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/432a4f98-877c-4f7a-b2b0-ce273a77450a-apiservice-cert\") pod \"metallb-operator-controller-manager-586b87b897-zpr4q\" (UID: \"432a4f98-877c-4f7a-b2b0-ce273a77450a\") " pod="metallb-system/metallb-operator-controller-manager-586b87b897-zpr4q" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.720402 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/432a4f98-877c-4f7a-b2b0-ce273a77450a-webhook-cert\") pod 
\"metallb-operator-controller-manager-586b87b897-zpr4q\" (UID: \"432a4f98-877c-4f7a-b2b0-ce273a77450a\") " pod="metallb-system/metallb-operator-controller-manager-586b87b897-zpr4q" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.720444 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9llmx\" (UniqueName: \"kubernetes.io/projected/432a4f98-877c-4f7a-b2b0-ce273a77450a-kube-api-access-9llmx\") pod \"metallb-operator-controller-manager-586b87b897-zpr4q\" (UID: \"432a4f98-877c-4f7a-b2b0-ce273a77450a\") " pod="metallb-system/metallb-operator-controller-manager-586b87b897-zpr4q" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.777828 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7df7c8d444-xs67n"] Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.778516 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7df7c8d444-xs67n" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.781291 4714 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-r42vn" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.781314 4714 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.781451 4714 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.821353 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/432a4f98-877c-4f7a-b2b0-ce273a77450a-apiservice-cert\") pod \"metallb-operator-controller-manager-586b87b897-zpr4q\" (UID: \"432a4f98-877c-4f7a-b2b0-ce273a77450a\") " pod="metallb-system/metallb-operator-controller-manager-586b87b897-zpr4q" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.821634 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/432a4f98-877c-4f7a-b2b0-ce273a77450a-webhook-cert\") pod \"metallb-operator-controller-manager-586b87b897-zpr4q\" (UID: \"432a4f98-877c-4f7a-b2b0-ce273a77450a\") " pod="metallb-system/metallb-operator-controller-manager-586b87b897-zpr4q" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.821731 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9llmx\" (UniqueName: \"kubernetes.io/projected/432a4f98-877c-4f7a-b2b0-ce273a77450a-kube-api-access-9llmx\") pod \"metallb-operator-controller-manager-586b87b897-zpr4q\" (UID: \"432a4f98-877c-4f7a-b2b0-ce273a77450a\") " pod="metallb-system/metallb-operator-controller-manager-586b87b897-zpr4q" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.826735 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/432a4f98-877c-4f7a-b2b0-ce273a77450a-apiservice-cert\") pod \"metallb-operator-controller-manager-586b87b897-zpr4q\" (UID: \"432a4f98-877c-4f7a-b2b0-ce273a77450a\") " pod="metallb-system/metallb-operator-controller-manager-586b87b897-zpr4q" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.830799 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/432a4f98-877c-4f7a-b2b0-ce273a77450a-webhook-cert\") pod \"metallb-operator-controller-manager-586b87b897-zpr4q\" (UID: \"432a4f98-877c-4f7a-b2b0-ce273a77450a\") " pod="metallb-system/metallb-operator-controller-manager-586b87b897-zpr4q" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.837673 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7df7c8d444-xs67n"] Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.849534 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9llmx\" (UniqueName: \"kubernetes.io/projected/432a4f98-877c-4f7a-b2b0-ce273a77450a-kube-api-access-9llmx\") pod \"metallb-operator-controller-manager-586b87b897-zpr4q\" (UID: \"432a4f98-877c-4f7a-b2b0-ce273a77450a\") " pod="metallb-system/metallb-operator-controller-manager-586b87b897-zpr4q" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.859831 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-586b87b897-zpr4q" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.923190 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ffe179b8-a1c8-430b-94f5-920aacf0defe-webhook-cert\") pod \"metallb-operator-webhook-server-7df7c8d444-xs67n\" (UID: \"ffe179b8-a1c8-430b-94f5-920aacf0defe\") " pod="metallb-system/metallb-operator-webhook-server-7df7c8d444-xs67n" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.923327 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q99dn\" (UniqueName: \"kubernetes.io/projected/ffe179b8-a1c8-430b-94f5-920aacf0defe-kube-api-access-q99dn\") pod \"metallb-operator-webhook-server-7df7c8d444-xs67n\" (UID: \"ffe179b8-a1c8-430b-94f5-920aacf0defe\") " pod="metallb-system/metallb-operator-webhook-server-7df7c8d444-xs67n" Jan 29 16:22:43 crc kubenswrapper[4714]: I0129 16:22:43.923383 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ffe179b8-a1c8-430b-94f5-920aacf0defe-apiservice-cert\") pod \"metallb-operator-webhook-server-7df7c8d444-xs67n\" (UID: \"ffe179b8-a1c8-430b-94f5-920aacf0defe\") " pod="metallb-system/metallb-operator-webhook-server-7df7c8d444-xs67n" Jan 29 16:22:44 crc kubenswrapper[4714]: I0129 16:22:44.032468 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ffe179b8-a1c8-430b-94f5-920aacf0defe-apiservice-cert\") pod \"metallb-operator-webhook-server-7df7c8d444-xs67n\" (UID: \"ffe179b8-a1c8-430b-94f5-920aacf0defe\") " pod="metallb-system/metallb-operator-webhook-server-7df7c8d444-xs67n" Jan 29 16:22:44 crc kubenswrapper[4714]: I0129 16:22:44.032828 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ffe179b8-a1c8-430b-94f5-920aacf0defe-webhook-cert\") pod \"metallb-operator-webhook-server-7df7c8d444-xs67n\" (UID: \"ffe179b8-a1c8-430b-94f5-920aacf0defe\") " pod="metallb-system/metallb-operator-webhook-server-7df7c8d444-xs67n" Jan 29 16:22:44 crc kubenswrapper[4714]: I0129 16:22:44.032897 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q99dn\" (UniqueName: 
\"kubernetes.io/projected/ffe179b8-a1c8-430b-94f5-920aacf0defe-kube-api-access-q99dn\") pod \"metallb-operator-webhook-server-7df7c8d444-xs67n\" (UID: \"ffe179b8-a1c8-430b-94f5-920aacf0defe\") " pod="metallb-system/metallb-operator-webhook-server-7df7c8d444-xs67n" Jan 29 16:22:44 crc kubenswrapper[4714]: I0129 16:22:44.037703 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ffe179b8-a1c8-430b-94f5-920aacf0defe-webhook-cert\") pod \"metallb-operator-webhook-server-7df7c8d444-xs67n\" (UID: \"ffe179b8-a1c8-430b-94f5-920aacf0defe\") " pod="metallb-system/metallb-operator-webhook-server-7df7c8d444-xs67n" Jan 29 16:22:44 crc kubenswrapper[4714]: I0129 16:22:44.038486 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ffe179b8-a1c8-430b-94f5-920aacf0defe-apiservice-cert\") pod \"metallb-operator-webhook-server-7df7c8d444-xs67n\" (UID: \"ffe179b8-a1c8-430b-94f5-920aacf0defe\") " pod="metallb-system/metallb-operator-webhook-server-7df7c8d444-xs67n" Jan 29 16:22:44 crc kubenswrapper[4714]: I0129 16:22:44.051981 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q99dn\" (UniqueName: \"kubernetes.io/projected/ffe179b8-a1c8-430b-94f5-920aacf0defe-kube-api-access-q99dn\") pod \"metallb-operator-webhook-server-7df7c8d444-xs67n\" (UID: \"ffe179b8-a1c8-430b-94f5-920aacf0defe\") " pod="metallb-system/metallb-operator-webhook-server-7df7c8d444-xs67n" Jan 29 16:22:44 crc kubenswrapper[4714]: I0129 16:22:44.092379 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7df7c8d444-xs67n" Jan 29 16:22:44 crc kubenswrapper[4714]: I0129 16:22:44.116854 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-586b87b897-zpr4q"] Jan 29 16:22:44 crc kubenswrapper[4714]: W0129 16:22:44.136360 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod432a4f98_877c_4f7a_b2b0_ce273a77450a.slice/crio-c4c9bacbb5eb32277a55ef382bf3de55198d38b22f7af753bd79ad92079bf8d7 WatchSource:0}: Error finding container c4c9bacbb5eb32277a55ef382bf3de55198d38b22f7af753bd79ad92079bf8d7: Status 404 returned error can't find the container with id c4c9bacbb5eb32277a55ef382bf3de55198d38b22f7af753bd79ad92079bf8d7 Jan 29 16:22:44 crc kubenswrapper[4714]: I0129 16:22:44.146310 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9ttgj" Jan 29 16:22:44 crc kubenswrapper[4714]: I0129 16:22:44.146399 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9ttgj" Jan 29 16:22:44 crc kubenswrapper[4714]: I0129 16:22:44.330147 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-586b87b897-zpr4q" event={"ID":"432a4f98-877c-4f7a-b2b0-ce273a77450a","Type":"ContainerStarted","Data":"c4c9bacbb5eb32277a55ef382bf3de55198d38b22f7af753bd79ad92079bf8d7"} Jan 29 16:22:44 crc kubenswrapper[4714]: I0129 16:22:44.565944 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7df7c8d444-xs67n"] Jan 29 16:22:44 crc kubenswrapper[4714]: W0129 16:22:44.574966 4714 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffe179b8_a1c8_430b_94f5_920aacf0defe.slice/crio-7d83d11710d4bdc281f2aed567ad9c874b0f2e6cbaee91d3ee9f2f812fe1cd00 WatchSource:0}: Error finding container 7d83d11710d4bdc281f2aed567ad9c874b0f2e6cbaee91d3ee9f2f812fe1cd00: Status 404 returned error can't find the container with id 7d83d11710d4bdc281f2aed567ad9c874b0f2e6cbaee91d3ee9f2f812fe1cd00 Jan 29 16:22:45 crc kubenswrapper[4714]: I0129 16:22:45.196910 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9ttgj" podUID="3b93b136-c182-4b20-89a4-1d61e1d2d03c" containerName="registry-server" probeResult="failure" output=< Jan 29 16:22:45 crc kubenswrapper[4714]: timeout: failed to connect service ":50051" within 1s Jan 29 16:22:45 crc kubenswrapper[4714]: > Jan 29 16:22:45 crc kubenswrapper[4714]: I0129 16:22:45.333905 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7df7c8d444-xs67n" event={"ID":"ffe179b8-a1c8-430b-94f5-920aacf0defe","Type":"ContainerStarted","Data":"7d83d11710d4bdc281f2aed567ad9c874b0f2e6cbaee91d3ee9f2f812fe1cd00"} Jan 29 16:22:54 crc kubenswrapper[4714]: I0129 16:22:54.201037 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9ttgj" Jan 29 16:22:54 crc kubenswrapper[4714]: I0129 16:22:54.256761 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9ttgj" Jan 29 16:22:54 crc kubenswrapper[4714]: I0129 16:22:54.458233 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9ttgj"] Jan 29 16:22:55 crc kubenswrapper[4714]: I0129 16:22:55.409048 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9ttgj" podUID="3b93b136-c182-4b20-89a4-1d61e1d2d03c" containerName="registry-server" containerID="cri-o://71891b59c66ba86ccba3be13ccefa3eee4b9a927ebc16ecd2fd3f3077bf4cab2" gracePeriod=2 Jan 29 16:22:56 crc kubenswrapper[4714]: I0129 16:22:56.422826 4714 generic.go:334] "Generic (PLEG): container finished" podID="3b93b136-c182-4b20-89a4-1d61e1d2d03c" containerID="71891b59c66ba86ccba3be13ccefa3eee4b9a927ebc16ecd2fd3f3077bf4cab2" exitCode=0 Jan 29 16:22:56 crc kubenswrapper[4714]: I0129 16:22:56.422878 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ttgj" event={"ID":"3b93b136-c182-4b20-89a4-1d61e1d2d03c","Type":"ContainerDied","Data":"71891b59c66ba86ccba3be13ccefa3eee4b9a927ebc16ecd2fd3f3077bf4cab2"} Jan 29 16:22:56 crc kubenswrapper[4714]: I0129 16:22:56.758421 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9ttgj" Jan 29 16:22:56 crc kubenswrapper[4714]: I0129 16:22:56.915560 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b93b136-c182-4b20-89a4-1d61e1d2d03c-utilities\") pod \"3b93b136-c182-4b20-89a4-1d61e1d2d03c\" (UID: \"3b93b136-c182-4b20-89a4-1d61e1d2d03c\") " Jan 29 16:22:56 crc kubenswrapper[4714]: I0129 16:22:56.915674 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkw5r\" (UniqueName: \"kubernetes.io/projected/3b93b136-c182-4b20-89a4-1d61e1d2d03c-kube-api-access-xkw5r\") pod \"3b93b136-c182-4b20-89a4-1d61e1d2d03c\" (UID: \"3b93b136-c182-4b20-89a4-1d61e1d2d03c\") " Jan 29 16:22:56 crc kubenswrapper[4714]: I0129 16:22:56.915738 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b93b136-c182-4b20-89a4-1d61e1d2d03c-catalog-content\") pod \"3b93b136-c182-4b20-89a4-1d61e1d2d03c\" (UID: \"3b93b136-c182-4b20-89a4-1d61e1d2d03c\") " Jan 29 16:22:56 crc kubenswrapper[4714]: I0129 16:22:56.918969 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b93b136-c182-4b20-89a4-1d61e1d2d03c-utilities" (OuterVolumeSpecName: "utilities") pod "3b93b136-c182-4b20-89a4-1d61e1d2d03c" (UID: "3b93b136-c182-4b20-89a4-1d61e1d2d03c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:22:56 crc kubenswrapper[4714]: I0129 16:22:56.921216 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b93b136-c182-4b20-89a4-1d61e1d2d03c-kube-api-access-xkw5r" (OuterVolumeSpecName: "kube-api-access-xkw5r") pod "3b93b136-c182-4b20-89a4-1d61e1d2d03c" (UID: "3b93b136-c182-4b20-89a4-1d61e1d2d03c"). InnerVolumeSpecName "kube-api-access-xkw5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:57 crc kubenswrapper[4714]: I0129 16:22:57.017876 4714 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b93b136-c182-4b20-89a4-1d61e1d2d03c-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:57 crc kubenswrapper[4714]: I0129 16:22:57.017914 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkw5r\" (UniqueName: \"kubernetes.io/projected/3b93b136-c182-4b20-89a4-1d61e1d2d03c-kube-api-access-xkw5r\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:57 crc kubenswrapper[4714]: I0129 16:22:57.044234 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b93b136-c182-4b20-89a4-1d61e1d2d03c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b93b136-c182-4b20-89a4-1d61e1d2d03c" (UID: "3b93b136-c182-4b20-89a4-1d61e1d2d03c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:22:57 crc kubenswrapper[4714]: I0129 16:22:57.119305 4714 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b93b136-c182-4b20-89a4-1d61e1d2d03c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:57 crc kubenswrapper[4714]: I0129 16:22:57.431967 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7df7c8d444-xs67n" event={"ID":"ffe179b8-a1c8-430b-94f5-920aacf0defe","Type":"ContainerStarted","Data":"ebb19f1188d4740707487d9496402b9a5006a1e2b090cbe22f57054f22b1098a"} Jan 29 16:22:57 crc kubenswrapper[4714]: I0129 16:22:57.434928 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ttgj" event={"ID":"3b93b136-c182-4b20-89a4-1d61e1d2d03c","Type":"ContainerDied","Data":"4716263f07ca6d3ad4b5be1c8fb4857b1c649402f03ef208b20007f3c151a261"} Jan 29 16:22:57 crc kubenswrapper[4714]: I0129 16:22:57.435078 4714 scope.go:117] "RemoveContainer" containerID="71891b59c66ba86ccba3be13ccefa3eee4b9a927ebc16ecd2fd3f3077bf4cab2" Jan 29 16:22:57 crc kubenswrapper[4714]: I0129 16:22:57.435276 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9ttgj" Jan 29 16:22:57 crc kubenswrapper[4714]: I0129 16:22:57.445225 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-586b87b897-zpr4q" event={"ID":"432a4f98-877c-4f7a-b2b0-ce273a77450a","Type":"ContainerStarted","Data":"8f2324e58296910aa2839a32886e6fbb898951f4aeaaf7bad1f63e19dd181e23"} Jan 29 16:22:57 crc kubenswrapper[4714]: I0129 16:22:57.458500 4714 scope.go:117] "RemoveContainer" containerID="6632448df0e09a8f0e5148074bcd76d6b08015e4e356eeae9a581a934dfb3cac" Jan 29 16:22:57 crc kubenswrapper[4714]: I0129 16:22:57.495673 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9ttgj"] Jan 29 16:22:57 crc kubenswrapper[4714]: I0129 16:22:57.501051 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9ttgj"] Jan 29 16:22:57 crc kubenswrapper[4714]: I0129 16:22:57.519172 4714 scope.go:117] "RemoveContainer" containerID="dc5891a8c29afb2826ce230ab3fd28a92cd17c413c6d5d58f4c0d944c000e9ef" Jan 29 16:22:57 crc kubenswrapper[4714]: I0129 16:22:57.844404 4714 patch_prober.go:28] interesting pod/machine-config-daemon-ppngk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:22:57 crc kubenswrapper[4714]: I0129 16:22:57.844472 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:22:58 crc kubenswrapper[4714]: I0129 16:22:58.194836 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b93b136-c182-4b20-89a4-1d61e1d2d03c" path="/var/lib/kubelet/pods/3b93b136-c182-4b20-89a4-1d61e1d2d03c/volumes" Jan 29 16:22:58 crc kubenswrapper[4714]: I0129 16:22:58.455033 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-586b87b897-zpr4q" Jan 29 16:22:58 crc kubenswrapper[4714]: I0129 16:22:58.455113 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7df7c8d444-xs67n" Jan 29 16:22:58 crc kubenswrapper[4714]: I0129 16:22:58.631463 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7df7c8d444-xs67n" podStartSLOduration=3.298912123 podStartE2EDuration="15.6314476s" podCreationTimestamp="2026-01-29 16:22:43 +0000 UTC" firstStartedPulling="2026-01-29 16:22:44.578312013 +0000 UTC m=+771.098813133" lastFinishedPulling="2026-01-29 16:22:56.91084749 +0000 UTC m=+783.431348610" observedRunningTime="2026-01-29 16:22:58.627840338 +0000 UTC m=+785.148341478" watchObservedRunningTime="2026-01-29 16:22:58.6314476 +0000 UTC m=+785.151948720" Jan 29 16:22:58 crc kubenswrapper[4714]: I0129 16:22:58.660797 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-586b87b897-zpr4q" podStartSLOduration=2.916059058 podStartE2EDuration="15.660776292s" podCreationTimestamp="2026-01-29 16:22:43 +0000 UTC" firstStartedPulling="2026-01-29 16:22:44.143196953 +0000 UTC m=+770.663698063" lastFinishedPulling="2026-01-29 16:22:56.887914167 +0000 UTC m=+783.408415297" observedRunningTime="2026-01-29 16:22:58.656557851 +0000 UTC m=+785.177058991" watchObservedRunningTime="2026-01-29 16:22:58.660776292 +0000 UTC m=+785.181277422" Jan 29 16:23:14 crc kubenswrapper[4714]: I0129 16:23:14.096866 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7df7c8d444-xs67n" Jan 29 16:23:27 crc kubenswrapper[4714]: I0129 16:23:27.844532 4714 patch_prober.go:28] interesting pod/machine-config-daemon-ppngk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:23:27 crc kubenswrapper[4714]: I0129 16:23:27.845073 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:23:33 crc kubenswrapper[4714]: I0129 16:23:33.863483 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-586b87b897-zpr4q" Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.643197 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk79r"] Jan 29 16:23:34 crc kubenswrapper[4714]: E0129 16:23:34.643672 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b93b136-c182-4b20-89a4-1d61e1d2d03c" containerName="extract-content" Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.643689 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b93b136-c182-4b20-89a4-1d61e1d2d03c" containerName="extract-content" Jan 29 16:23:34 crc kubenswrapper[4714]: E0129 16:23:34.643706 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b93b136-c182-4b20-89a4-1d61e1d2d03c" containerName="extract-utilities" Jan 29 16:23:34 crc 
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.643713 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b93b136-c182-4b20-89a4-1d61e1d2d03c" containerName="extract-utilities"
Jan 29 16:23:34 crc kubenswrapper[4714]: E0129 16:23:34.643722 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b93b136-c182-4b20-89a4-1d61e1d2d03c" containerName="registry-server"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.643728 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b93b136-c182-4b20-89a4-1d61e1d2d03c" containerName="registry-server"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.643820 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b93b136-c182-4b20-89a4-1d61e1d2d03c" containerName="registry-server"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.644176 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk79r"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.650305 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-59pmz"]
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.652224 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-59pmz"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.661358 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk79r"]
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.661980 4714 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-kpj2m"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.662382 4714 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.662671 4714 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.662978 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.737192 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-7mmsh"]
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.738018 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-7mmsh"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.741944 4714 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.741964 4714 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.742152 4714 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-dh498"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.742156 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.749338 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bbfcf92-8a27-4ba0-9017-7c36906791c8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-kk79r\" (UID: \"9bbfcf92-8a27-4ba0-9017-7c36906791c8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk79r"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.749378 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a97dd473-5873-4aa1-9166-f7a0c6581be1-metrics-certs\") pod \"frr-k8s-59pmz\" (UID: \"a97dd473-5873-4aa1-9166-f7a0c6581be1\") " pod="metallb-system/frr-k8s-59pmz"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.749424 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a97dd473-5873-4aa1-9166-f7a0c6581be1-frr-conf\") pod \"frr-k8s-59pmz\" (UID: \"a97dd473-5873-4aa1-9166-f7a0c6581be1\") " pod="metallb-system/frr-k8s-59pmz"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.749463 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgr67\" (UniqueName: \"kubernetes.io/projected/a97dd473-5873-4aa1-9166-f7a0c6581be1-kube-api-access-bgr67\") pod \"frr-k8s-59pmz\" (UID: \"a97dd473-5873-4aa1-9166-f7a0c6581be1\") " pod="metallb-system/frr-k8s-59pmz"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.749511 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvvtv\" (UniqueName: \"kubernetes.io/projected/9bbfcf92-8a27-4ba0-9017-7c36906791c8-kube-api-access-cvvtv\") pod \"frr-k8s-webhook-server-7df86c4f6c-kk79r\" (UID: \"9bbfcf92-8a27-4ba0-9017-7c36906791c8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk79r"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.749532 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a97dd473-5873-4aa1-9166-f7a0c6581be1-reloader\") pod \"frr-k8s-59pmz\" (UID: \"a97dd473-5873-4aa1-9166-f7a0c6581be1\") " pod="metallb-system/frr-k8s-59pmz"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.749617 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a97dd473-5873-4aa1-9166-f7a0c6581be1-metrics\") pod \"frr-k8s-59pmz\" (UID: \"a97dd473-5873-4aa1-9166-f7a0c6581be1\") " pod="metallb-system/frr-k8s-59pmz"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.749717 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a97dd473-5873-4aa1-9166-f7a0c6581be1-frr-sockets\") pod \"frr-k8s-59pmz\" (UID: \"a97dd473-5873-4aa1-9166-f7a0c6581be1\") " pod="metallb-system/frr-k8s-59pmz"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.749821 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a97dd473-5873-4aa1-9166-f7a0c6581be1-frr-startup\") pod \"frr-k8s-59pmz\" (UID: \"a97dd473-5873-4aa1-9166-f7a0c6581be1\") " pod="metallb-system/frr-k8s-59pmz"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.755509 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-m26zh"]
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.756277 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-m26zh"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.757958 4714 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.768854 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-m26zh"]
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.851310 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a97dd473-5873-4aa1-9166-f7a0c6581be1-reloader\") pod \"frr-k8s-59pmz\" (UID: \"a97dd473-5873-4aa1-9166-f7a0c6581be1\") " pod="metallb-system/frr-k8s-59pmz"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.851561 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a97dd473-5873-4aa1-9166-f7a0c6581be1-metrics\") pod \"frr-k8s-59pmz\" (UID: \"a97dd473-5873-4aa1-9166-f7a0c6581be1\") " pod="metallb-system/frr-k8s-59pmz"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.851646 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a97dd473-5873-4aa1-9166-f7a0c6581be1-frr-sockets\") pod \"frr-k8s-59pmz\" (UID: \"a97dd473-5873-4aa1-9166-f7a0c6581be1\") " pod="metallb-system/frr-k8s-59pmz"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.851719 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78b34628-144f-416a-b493-15ba445caa48-cert\") pod \"controller-6968d8fdc4-m26zh\" (UID: \"78b34628-144f-416a-b493-15ba445caa48\") " pod="metallb-system/controller-6968d8fdc4-m26zh"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.851793 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78b34628-144f-416a-b493-15ba445caa48-metrics-certs\") pod \"controller-6968d8fdc4-m26zh\" (UID: \"78b34628-144f-416a-b493-15ba445caa48\") " pod="metallb-system/controller-6968d8fdc4-m26zh"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.851870 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k9wr\" (UniqueName: \"kubernetes.io/projected/813f735d-8336-49e9-b018-e6dbf74ddc99-kube-api-access-2k9wr\") pod \"speaker-7mmsh\" (UID: \"813f735d-8336-49e9-b018-e6dbf74ddc99\") " pod="metallb-system/speaker-7mmsh"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.851821 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a97dd473-5873-4aa1-9166-f7a0c6581be1-reloader\") pod \"frr-k8s-59pmz\" (UID: \"a97dd473-5873-4aa1-9166-f7a0c6581be1\") " pod="metallb-system/frr-k8s-59pmz"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.851967 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a97dd473-5873-4aa1-9166-f7a0c6581be1-frr-sockets\") pod \"frr-k8s-59pmz\" (UID: \"a97dd473-5873-4aa1-9166-f7a0c6581be1\") " pod="metallb-system/frr-k8s-59pmz"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.852041 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/813f735d-8336-49e9-b018-e6dbf74ddc99-memberlist\") pod \"speaker-7mmsh\" (UID: \"813f735d-8336-49e9-b018-e6dbf74ddc99\") " pod="metallb-system/speaker-7mmsh"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.852115 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kjbj\" (UniqueName: \"kubernetes.io/projected/78b34628-144f-416a-b493-15ba445caa48-kube-api-access-5kjbj\") pod \"controller-6968d8fdc4-m26zh\" (UID: \"78b34628-144f-416a-b493-15ba445caa48\") " pod="metallb-system/controller-6968d8fdc4-m26zh"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.852185 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a97dd473-5873-4aa1-9166-f7a0c6581be1-frr-startup\") pod \"frr-k8s-59pmz\" (UID: \"a97dd473-5873-4aa1-9166-f7a0c6581be1\") " pod="metallb-system/frr-k8s-59pmz"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.852273 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bbfcf92-8a27-4ba0-9017-7c36906791c8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-kk79r\" (UID: \"9bbfcf92-8a27-4ba0-9017-7c36906791c8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk79r"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.852338 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a97dd473-5873-4aa1-9166-f7a0c6581be1-metrics-certs\") pod \"frr-k8s-59pmz\" (UID: \"a97dd473-5873-4aa1-9166-f7a0c6581be1\") " pod="metallb-system/frr-k8s-59pmz"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.852226 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a97dd473-5873-4aa1-9166-f7a0c6581be1-metrics\") pod \"frr-k8s-59pmz\" (UID: \"a97dd473-5873-4aa1-9166-f7a0c6581be1\") " pod="metallb-system/frr-k8s-59pmz"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.852460 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/813f735d-8336-49e9-b018-e6dbf74ddc99-metrics-certs\") pod \"speaker-7mmsh\" (UID: \"813f735d-8336-49e9-b018-e6dbf74ddc99\") " pod="metallb-system/speaker-7mmsh"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.852544 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a97dd473-5873-4aa1-9166-f7a0c6581be1-frr-conf\") pod \"frr-k8s-59pmz\" (UID: \"a97dd473-5873-4aa1-9166-f7a0c6581be1\") " pod="metallb-system/frr-k8s-59pmz"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.852624 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgr67\" (UniqueName: \"kubernetes.io/projected/a97dd473-5873-4aa1-9166-f7a0c6581be1-kube-api-access-bgr67\") pod \"frr-k8s-59pmz\" (UID: \"a97dd473-5873-4aa1-9166-f7a0c6581be1\") " pod="metallb-system/frr-k8s-59pmz"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.852700 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/813f735d-8336-49e9-b018-e6dbf74ddc99-metallb-excludel2\") pod \"speaker-7mmsh\" (UID: \"813f735d-8336-49e9-b018-e6dbf74ddc99\") " pod="metallb-system/speaker-7mmsh"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.852799 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvvtv\" (UniqueName: \"kubernetes.io/projected/9bbfcf92-8a27-4ba0-9017-7c36906791c8-kube-api-access-cvvtv\") pod \"frr-k8s-webhook-server-7df86c4f6c-kk79r\" (UID: \"9bbfcf92-8a27-4ba0-9017-7c36906791c8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk79r"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.852802 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a97dd473-5873-4aa1-9166-f7a0c6581be1-frr-conf\") pod \"frr-k8s-59pmz\" (UID: \"a97dd473-5873-4aa1-9166-f7a0c6581be1\") " pod="metallb-system/frr-k8s-59pmz"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.853502 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a97dd473-5873-4aa1-9166-f7a0c6581be1-frr-startup\") pod \"frr-k8s-59pmz\" (UID: \"a97dd473-5873-4aa1-9166-f7a0c6581be1\") " pod="metallb-system/frr-k8s-59pmz"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.861595 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a97dd473-5873-4aa1-9166-f7a0c6581be1-metrics-certs\") pod \"frr-k8s-59pmz\" (UID: \"a97dd473-5873-4aa1-9166-f7a0c6581be1\") " pod="metallb-system/frr-k8s-59pmz"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.862332 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bbfcf92-8a27-4ba0-9017-7c36906791c8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-kk79r\" (UID: \"9bbfcf92-8a27-4ba0-9017-7c36906791c8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk79r"
Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.867801 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvvtv\" (UniqueName: \"kubernetes.io/projected/9bbfcf92-8a27-4ba0-9017-7c36906791c8-kube-api-access-cvvtv\") pod \"frr-k8s-webhook-server-7df86c4f6c-kk79r\" (UID: \"9bbfcf92-8a27-4ba0-9017-7c36906791c8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk79r"
\"a97dd473-5873-4aa1-9166-f7a0c6581be1\") " pod="metallb-system/frr-k8s-59pmz" Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.954440 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/813f735d-8336-49e9-b018-e6dbf74ddc99-metrics-certs\") pod \"speaker-7mmsh\" (UID: \"813f735d-8336-49e9-b018-e6dbf74ddc99\") " pod="metallb-system/speaker-7mmsh" Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.954503 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/813f735d-8336-49e9-b018-e6dbf74ddc99-metallb-excludel2\") pod \"speaker-7mmsh\" (UID: \"813f735d-8336-49e9-b018-e6dbf74ddc99\") " pod="metallb-system/speaker-7mmsh" Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.954541 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78b34628-144f-416a-b493-15ba445caa48-cert\") pod \"controller-6968d8fdc4-m26zh\" (UID: \"78b34628-144f-416a-b493-15ba445caa48\") " pod="metallb-system/controller-6968d8fdc4-m26zh" Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.954564 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78b34628-144f-416a-b493-15ba445caa48-metrics-certs\") pod \"controller-6968d8fdc4-m26zh\" (UID: \"78b34628-144f-416a-b493-15ba445caa48\") " pod="metallb-system/controller-6968d8fdc4-m26zh" Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.954586 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k9wr\" (UniqueName: \"kubernetes.io/projected/813f735d-8336-49e9-b018-e6dbf74ddc99-kube-api-access-2k9wr\") pod \"speaker-7mmsh\" (UID: \"813f735d-8336-49e9-b018-e6dbf74ddc99\") " pod="metallb-system/speaker-7mmsh" Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.954615 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/813f735d-8336-49e9-b018-e6dbf74ddc99-memberlist\") pod \"speaker-7mmsh\" (UID: \"813f735d-8336-49e9-b018-e6dbf74ddc99\") " pod="metallb-system/speaker-7mmsh" Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.954636 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kjbj\" (UniqueName: \"kubernetes.io/projected/78b34628-144f-416a-b493-15ba445caa48-kube-api-access-5kjbj\") pod \"controller-6968d8fdc4-m26zh\" (UID: \"78b34628-144f-416a-b493-15ba445caa48\") " pod="metallb-system/controller-6968d8fdc4-m26zh" Jan 29 16:23:34 crc kubenswrapper[4714]: E0129 16:23:34.955047 4714 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 16:23:34 crc kubenswrapper[4714]: E0129 16:23:34.955098 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/813f735d-8336-49e9-b018-e6dbf74ddc99-memberlist podName:813f735d-8336-49e9-b018-e6dbf74ddc99 nodeName:}" failed. No retries permitted until 2026-01-29 16:23:35.455081671 +0000 UTC m=+821.975582791 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/813f735d-8336-49e9-b018-e6dbf74ddc99-memberlist") pod "speaker-7mmsh" (UID: "813f735d-8336-49e9-b018-e6dbf74ddc99") : secret "metallb-memberlist" not found Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.955816 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/813f735d-8336-49e9-b018-e6dbf74ddc99-metallb-excludel2\") pod \"speaker-7mmsh\" (UID: \"813f735d-8336-49e9-b018-e6dbf74ddc99\") " pod="metallb-system/speaker-7mmsh" Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.957212 4714 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.959526 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/813f735d-8336-49e9-b018-e6dbf74ddc99-metrics-certs\") pod \"speaker-7mmsh\" (UID: \"813f735d-8336-49e9-b018-e6dbf74ddc99\") " pod="metallb-system/speaker-7mmsh" Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.960010 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78b34628-144f-416a-b493-15ba445caa48-metrics-certs\") pod \"controller-6968d8fdc4-m26zh\" (UID: \"78b34628-144f-416a-b493-15ba445caa48\") " pod="metallb-system/controller-6968d8fdc4-m26zh" Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.967043 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk79r" Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.973871 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k9wr\" (UniqueName: \"kubernetes.io/projected/813f735d-8336-49e9-b018-e6dbf74ddc99-kube-api-access-2k9wr\") pod \"speaker-7mmsh\" (UID: \"813f735d-8336-49e9-b018-e6dbf74ddc99\") " pod="metallb-system/speaker-7mmsh" Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.975199 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kjbj\" (UniqueName: \"kubernetes.io/projected/78b34628-144f-416a-b493-15ba445caa48-kube-api-access-5kjbj\") pod \"controller-6968d8fdc4-m26zh\" (UID: \"78b34628-144f-416a-b493-15ba445caa48\") " pod="metallb-system/controller-6968d8fdc4-m26zh" Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.975608 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-59pmz" Jan 29 16:23:34 crc kubenswrapper[4714]: I0129 16:23:34.976289 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78b34628-144f-416a-b493-15ba445caa48-cert\") pod \"controller-6968d8fdc4-m26zh\" (UID: \"78b34628-144f-416a-b493-15ba445caa48\") " pod="metallb-system/controller-6968d8fdc4-m26zh" Jan 29 16:23:35 crc kubenswrapper[4714]: I0129 16:23:35.072161 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-m26zh" Jan 29 16:23:35 crc kubenswrapper[4714]: I0129 16:23:35.328255 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-m26zh"] Jan 29 16:23:35 crc kubenswrapper[4714]: I0129 16:23:35.462318 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/813f735d-8336-49e9-b018-e6dbf74ddc99-memberlist\") pod \"speaker-7mmsh\" (UID: \"813f735d-8336-49e9-b018-e6dbf74ddc99\") " pod="metallb-system/speaker-7mmsh" Jan 29 16:23:35 crc kubenswrapper[4714]: E0129 16:23:35.462490 4714 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 16:23:35 crc kubenswrapper[4714]: E0129 16:23:35.462564 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/813f735d-8336-49e9-b018-e6dbf74ddc99-memberlist podName:813f735d-8336-49e9-b018-e6dbf74ddc99 nodeName:}" failed. No retries permitted until 2026-01-29 16:23:36.462549627 +0000 UTC m=+822.983050747 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/813f735d-8336-49e9-b018-e6dbf74ddc99-memberlist") pod "speaker-7mmsh" (UID: "813f735d-8336-49e9-b018-e6dbf74ddc99") : secret "metallb-memberlist" not found Jan 29 16:23:35 crc kubenswrapper[4714]: I0129 16:23:35.469384 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk79r"] Jan 29 16:23:35 crc kubenswrapper[4714]: W0129 16:23:35.473345 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bbfcf92_8a27_4ba0_9017_7c36906791c8.slice/crio-eb293dcb532fa6a0e844c468662976cc97b3ffe233e3d39cbd3ad550635645a0 WatchSource:0}: Error finding container eb293dcb532fa6a0e844c468662976cc97b3ffe233e3d39cbd3ad550635645a0: Status 404 returned error can't find the container with id eb293dcb532fa6a0e844c468662976cc97b3ffe233e3d39cbd3ad550635645a0 Jan 29 16:23:35 crc kubenswrapper[4714]: I0129 16:23:35.682082 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk79r" event={"ID":"9bbfcf92-8a27-4ba0-9017-7c36906791c8","Type":"ContainerStarted","Data":"eb293dcb532fa6a0e844c468662976cc97b3ffe233e3d39cbd3ad550635645a0"} Jan 29 16:23:35 crc kubenswrapper[4714]: I0129 16:23:35.683846 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-m26zh" event={"ID":"78b34628-144f-416a-b493-15ba445caa48","Type":"ContainerStarted","Data":"42678d585a93ee4e807989c839b6083070ee10ae0eaf4c398d2cbf846d9e2bf9"} Jan 29 16:23:35 crc kubenswrapper[4714]: I0129 16:23:35.683880 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-m26zh" event={"ID":"78b34628-144f-416a-b493-15ba445caa48","Type":"ContainerStarted","Data":"d65afc9c64b6337e908d8e47d9121e80cfa91d9056cb7eb9f807063d779583ac"} Jan 29 16:23:35 crc kubenswrapper[4714]: I0129 16:23:35.684796 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-59pmz" event={"ID":"a97dd473-5873-4aa1-9166-f7a0c6581be1","Type":"ContainerStarted","Data":"54e1ecd7827f449b3ac55f819a0ff9fe56bcf3d7e18090da24e66a24fe177b4c"} Jan 29 16:23:36 crc kubenswrapper[4714]: I0129 16:23:36.477022 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" 
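
The two MountVolume.SetUp failures above show the kubelet's per-volume retry backoff doubling (durationBeforeRetry 500ms, then 1s) while the metallb-memberlist secret does not yet exist; the mount finally succeeds at 16:23:36.482717 below, once the secret has been created. A client-go sketch along the same lines, assuming in-cluster credentials and using only names that appear in the log (an illustrative helper, not kubelet code):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg, err := rest.InClusterConfig()
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Double the delay between attempts, mirroring the
    	// durationBeforeRetry progression in the log (500ms, 1s, ...).
    	delay := 500 * time.Millisecond
    	for {
    		_, err := client.CoreV1().Secrets("metallb-system").
    			Get(context.TODO(), "metallb-memberlist", metav1.GetOptions{})
    		if err == nil {
    			fmt.Println("secret present; the memberlist mount can proceed")
    			return
    		}
    		fmt.Printf("not ready (%v); retrying in %v\n", err, delay)
    		time.Sleep(delay)
    		delay *= 2
    	}
    }
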
(UniqueName: \"kubernetes.io/secret/813f735d-8336-49e9-b018-e6dbf74ddc99-memberlist\") pod \"speaker-7mmsh\" (UID: \"813f735d-8336-49e9-b018-e6dbf74ddc99\") " pod="metallb-system/speaker-7mmsh" Jan 29 16:23:36 crc kubenswrapper[4714]: I0129 16:23:36.482717 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/813f735d-8336-49e9-b018-e6dbf74ddc99-memberlist\") pod \"speaker-7mmsh\" (UID: \"813f735d-8336-49e9-b018-e6dbf74ddc99\") " pod="metallb-system/speaker-7mmsh" Jan 29 16:23:36 crc kubenswrapper[4714]: I0129 16:23:36.551175 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-7mmsh" Jan 29 16:23:36 crc kubenswrapper[4714]: I0129 16:23:36.692543 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7mmsh" event={"ID":"813f735d-8336-49e9-b018-e6dbf74ddc99","Type":"ContainerStarted","Data":"1d545643ff622961dc3f0863b6328e6e18150121c23013cbe42045322f9e36f3"} Jan 29 16:23:37 crc kubenswrapper[4714]: I0129 16:23:37.705787 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7mmsh" event={"ID":"813f735d-8336-49e9-b018-e6dbf74ddc99","Type":"ContainerStarted","Data":"aaa70c96c3fba2bd7a53fe7ad0d780d5d36803534760758c3d9bfc9dd4187178"} Jan 29 16:23:39 crc kubenswrapper[4714]: I0129 16:23:39.717278 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-m26zh" event={"ID":"78b34628-144f-416a-b493-15ba445caa48","Type":"ContainerStarted","Data":"c3d0e7f3954182f3b505afdfd5742def976e5a1df65e4d6a4dd928cfc467edf4"} Jan 29 16:23:39 crc kubenswrapper[4714]: I0129 16:23:39.717633 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-m26zh" Jan 29 16:23:39 crc kubenswrapper[4714]: I0129 16:23:39.719781 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7mmsh" event={"ID":"813f735d-8336-49e9-b018-e6dbf74ddc99","Type":"ContainerStarted","Data":"6aba26b9b5b0b50b75e9b9e6ceda106719062ca4e6121e67474c51352a428a90"} Jan 29 16:23:39 crc kubenswrapper[4714]: I0129 16:23:39.719958 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-7mmsh" Jan 29 16:23:39 crc kubenswrapper[4714]: I0129 16:23:39.736998 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-m26zh" podStartSLOduration=2.136871459 podStartE2EDuration="5.73697936s" podCreationTimestamp="2026-01-29 16:23:34 +0000 UTC" firstStartedPulling="2026-01-29 16:23:35.429802619 +0000 UTC m=+821.950303739" lastFinishedPulling="2026-01-29 16:23:39.02991052 +0000 UTC m=+825.550411640" observedRunningTime="2026-01-29 16:23:39.733837733 +0000 UTC m=+826.254338853" watchObservedRunningTime="2026-01-29 16:23:39.73697936 +0000 UTC m=+826.257480480" Jan 29 16:23:39 crc kubenswrapper[4714]: I0129 16:23:39.758580 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-7mmsh" podStartSLOduration=3.646873782 podStartE2EDuration="5.758566501s" podCreationTimestamp="2026-01-29 16:23:34 +0000 UTC" firstStartedPulling="2026-01-29 16:23:36.93685673 +0000 UTC m=+823.457357850" lastFinishedPulling="2026-01-29 16:23:39.048549449 +0000 UTC m=+825.569050569" observedRunningTime="2026-01-29 16:23:39.758429757 +0000 UTC m=+826.278930887" watchObservedRunningTime="2026-01-29 16:23:39.758566501 +0000 UTC m=+826.279067621" Jan 29 16:23:42 crc 
kubenswrapper[4714]: I0129 16:23:42.736977 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk79r" event={"ID":"9bbfcf92-8a27-4ba0-9017-7c36906791c8","Type":"ContainerStarted","Data":"df1c444d10af60503ec695ecd526e96e371ce9d52cac5487bf24f6a493a8956d"} Jan 29 16:23:42 crc kubenswrapper[4714]: I0129 16:23:42.737348 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk79r" Jan 29 16:23:42 crc kubenswrapper[4714]: I0129 16:23:42.739539 4714 generic.go:334] "Generic (PLEG): container finished" podID="a97dd473-5873-4aa1-9166-f7a0c6581be1" containerID="e8eb896c49334701cb329d19d54e164c9ae10325bd6a724e508b54dbaad17ad0" exitCode=0 Jan 29 16:23:42 crc kubenswrapper[4714]: I0129 16:23:42.739593 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-59pmz" event={"ID":"a97dd473-5873-4aa1-9166-f7a0c6581be1","Type":"ContainerDied","Data":"e8eb896c49334701cb329d19d54e164c9ae10325bd6a724e508b54dbaad17ad0"} Jan 29 16:23:42 crc kubenswrapper[4714]: I0129 16:23:42.760311 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk79r" podStartSLOduration=1.71547556 podStartE2EDuration="8.760279489s" podCreationTimestamp="2026-01-29 16:23:34 +0000 UTC" firstStartedPulling="2026-01-29 16:23:35.476622395 +0000 UTC m=+821.997123515" lastFinishedPulling="2026-01-29 16:23:42.521426314 +0000 UTC m=+829.041927444" observedRunningTime="2026-01-29 16:23:42.756689967 +0000 UTC m=+829.277191117" watchObservedRunningTime="2026-01-29 16:23:42.760279489 +0000 UTC m=+829.280780669" Jan 29 16:23:43 crc kubenswrapper[4714]: I0129 16:23:43.750057 4714 generic.go:334] "Generic (PLEG): container finished" podID="a97dd473-5873-4aa1-9166-f7a0c6581be1" containerID="bb039282043c0e2e27e8fbb3aaa72038d17c43384473549fe668defe9937c3bd" exitCode=0 Jan 29 16:23:43 crc kubenswrapper[4714]: I0129 16:23:43.750204 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-59pmz" event={"ID":"a97dd473-5873-4aa1-9166-f7a0c6581be1","Type":"ContainerDied","Data":"bb039282043c0e2e27e8fbb3aaa72038d17c43384473549fe668defe9937c3bd"} Jan 29 16:23:44 crc kubenswrapper[4714]: I0129 16:23:44.759145 4714 generic.go:334] "Generic (PLEG): container finished" podID="a97dd473-5873-4aa1-9166-f7a0c6581be1" containerID="0d159d7890f1bc21ee4bf02da74cfc8ebcc219bc17c61ccca4fb06b34bd89880" exitCode=0 Jan 29 16:23:44 crc kubenswrapper[4714]: I0129 16:23:44.759303 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-59pmz" event={"ID":"a97dd473-5873-4aa1-9166-f7a0c6581be1","Type":"ContainerDied","Data":"0d159d7890f1bc21ee4bf02da74cfc8ebcc219bc17c61ccca4fb06b34bd89880"} Jan 29 16:23:45 crc kubenswrapper[4714]: I0129 16:23:45.090143 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-m26zh" Jan 29 16:23:45 crc kubenswrapper[4714]: I0129 16:23:45.774997 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-59pmz" event={"ID":"a97dd473-5873-4aa1-9166-f7a0c6581be1","Type":"ContainerStarted","Data":"e6b47835a151d644b4b184747c5be8a671e14885f81ea0d64c4a2f8066c1c93d"} Jan 29 16:23:45 crc kubenswrapper[4714]: I0129 16:23:45.775063 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-59pmz" 
event={"ID":"a97dd473-5873-4aa1-9166-f7a0c6581be1","Type":"ContainerStarted","Data":"42f637cc9a9cb1a567464df9bf350b9575f31b124eb87d2a9f41faf4265d58ce"} Jan 29 16:23:45 crc kubenswrapper[4714]: I0129 16:23:45.775079 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-59pmz" event={"ID":"a97dd473-5873-4aa1-9166-f7a0c6581be1","Type":"ContainerStarted","Data":"e4eb656ef2f36a80ede4736e536c5a4d89016f3180165779fe81680c7e88dda5"} Jan 29 16:23:45 crc kubenswrapper[4714]: I0129 16:23:45.775092 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-59pmz" event={"ID":"a97dd473-5873-4aa1-9166-f7a0c6581be1","Type":"ContainerStarted","Data":"b86c31da87199869693e3afa55f5db0defdbe590c0ca1e39d3d2937bacc30075"} Jan 29 16:23:45 crc kubenswrapper[4714]: I0129 16:23:45.775104 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-59pmz" event={"ID":"a97dd473-5873-4aa1-9166-f7a0c6581be1","Type":"ContainerStarted","Data":"896a7e5790b8e818708c993aca3b3cacb73f13ca0a6340e7992bf210ae77e6c7"} Jan 29 16:23:46 crc kubenswrapper[4714]: I0129 16:23:46.556904 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-7mmsh" Jan 29 16:23:46 crc kubenswrapper[4714]: I0129 16:23:46.792578 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-59pmz" event={"ID":"a97dd473-5873-4aa1-9166-f7a0c6581be1","Type":"ContainerStarted","Data":"9168595a16c083d33f31bdc99300f584dd37b304de01c3c16e0c277103745a6e"} Jan 29 16:23:46 crc kubenswrapper[4714]: I0129 16:23:46.792923 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-59pmz" Jan 29 16:23:46 crc kubenswrapper[4714]: I0129 16:23:46.836042 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-59pmz" podStartSLOduration=5.49155512 podStartE2EDuration="12.836022306s" podCreationTimestamp="2026-01-29 16:23:34 +0000 UTC" firstStartedPulling="2026-01-29 16:23:35.157647228 +0000 UTC m=+821.678148348" lastFinishedPulling="2026-01-29 16:23:42.502114414 +0000 UTC m=+829.022615534" observedRunningTime="2026-01-29 16:23:46.834169008 +0000 UTC m=+833.354670188" watchObservedRunningTime="2026-01-29 16:23:46.836022306 +0000 UTC m=+833.356523426" Jan 29 16:23:49 crc kubenswrapper[4714]: I0129 16:23:49.976774 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-59pmz" Jan 29 16:23:50 crc kubenswrapper[4714]: I0129 16:23:50.046594 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-59pmz" Jan 29 16:23:52 crc kubenswrapper[4714]: I0129 16:23:52.526144 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-jxfdk"] Jan 29 16:23:52 crc kubenswrapper[4714]: I0129 16:23:52.527197 4714 util.go:30] "No sandbox for pod can be found. 
Jan 29 16:23:52 crc kubenswrapper[4714]: I0129 16:23:52.527197 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-jxfdk"
Jan 29 16:23:52 crc kubenswrapper[4714]: I0129 16:23:52.530645 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-index-dockercfg-zfxjm"
Jan 29 16:23:52 crc kubenswrapper[4714]: I0129 16:23:52.530877 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Jan 29 16:23:52 crc kubenswrapper[4714]: I0129 16:23:52.531015 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Jan 29 16:23:52 crc kubenswrapper[4714]: I0129 16:23:52.592671 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-jxfdk"]
Jan 29 16:23:52 crc kubenswrapper[4714]: I0129 16:23:52.631035 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtkk7\" (UniqueName: \"kubernetes.io/projected/c73614a5-aed3-4942-9382-2981e22773ec-kube-api-access-wtkk7\") pod \"mariadb-operator-index-jxfdk\" (UID: \"c73614a5-aed3-4942-9382-2981e22773ec\") " pod="openstack-operators/mariadb-operator-index-jxfdk"
Jan 29 16:23:52 crc kubenswrapper[4714]: I0129 16:23:52.731855 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtkk7\" (UniqueName: \"kubernetes.io/projected/c73614a5-aed3-4942-9382-2981e22773ec-kube-api-access-wtkk7\") pod \"mariadb-operator-index-jxfdk\" (UID: \"c73614a5-aed3-4942-9382-2981e22773ec\") " pod="openstack-operators/mariadb-operator-index-jxfdk"
Jan 29 16:23:52 crc kubenswrapper[4714]: I0129 16:23:52.751858 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtkk7\" (UniqueName: \"kubernetes.io/projected/c73614a5-aed3-4942-9382-2981e22773ec-kube-api-access-wtkk7\") pod \"mariadb-operator-index-jxfdk\" (UID: \"c73614a5-aed3-4942-9382-2981e22773ec\") " pod="openstack-operators/mariadb-operator-index-jxfdk"
Jan 29 16:23:52 crc kubenswrapper[4714]: I0129 16:23:52.846678 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-jxfdk"
Jan 29 16:23:53 crc kubenswrapper[4714]: I0129 16:23:53.018698 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-jxfdk"]
Jan 29 16:23:53 crc kubenswrapper[4714]: W0129 16:23:53.026780 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc73614a5_aed3_4942_9382_2981e22773ec.slice/crio-590a2584ed40d70665967956dedd365e20ebaf918ed6a882e280c2a862bb3df9 WatchSource:0}: Error finding container 590a2584ed40d70665967956dedd365e20ebaf918ed6a882e280c2a862bb3df9: Status 404 returned error can't find the container with id 590a2584ed40d70665967956dedd365e20ebaf918ed6a882e280c2a862bb3df9
Jan 29 16:23:53 crc kubenswrapper[4714]: I0129 16:23:53.837603 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-jxfdk" event={"ID":"c73614a5-aed3-4942-9382-2981e22773ec","Type":"ContainerStarted","Data":"590a2584ed40d70665967956dedd365e20ebaf918ed6a882e280c2a862bb3df9"}
Jan 29 16:23:54 crc kubenswrapper[4714]: I0129 16:23:54.846704 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-jxfdk" event={"ID":"c73614a5-aed3-4942-9382-2981e22773ec","Type":"ContainerStarted","Data":"e13c954e2ceec860ed151e1275323a23d1c2600cd51f3926408e9e8970fbca50"}
Jan 29 16:23:54 crc kubenswrapper[4714]: I0129 16:23:54.866693 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-jxfdk" podStartSLOduration=1.963355862 podStartE2EDuration="2.866675844s" podCreationTimestamp="2026-01-29 16:23:52 +0000 UTC" firstStartedPulling="2026-01-29 16:23:53.030441269 +0000 UTC m=+839.550942389" lastFinishedPulling="2026-01-29 16:23:53.933761251 +0000 UTC m=+840.454262371" observedRunningTime="2026-01-29 16:23:54.865537328 +0000 UTC m=+841.386038458" watchObservedRunningTime="2026-01-29 16:23:54.866675844 +0000 UTC m=+841.387176954"
Jan 29 16:23:54 crc kubenswrapper[4714]: I0129 16:23:54.975576 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kk79r"
Jan 29 16:23:54 crc kubenswrapper[4714]: I0129 16:23:54.978766 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-59pmz"
Jan 29 16:23:55 crc kubenswrapper[4714]: I0129 16:23:55.911220 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-jxfdk"]
Jan 29 16:23:56 crc kubenswrapper[4714]: I0129 16:23:56.518771 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-l6dkm"]
Jan 29 16:23:56 crc kubenswrapper[4714]: I0129 16:23:56.519887 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-l6dkm"
Jan 29 16:23:56 crc kubenswrapper[4714]: I0129 16:23:56.532508 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-l6dkm"]
Jan 29 16:23:56 crc kubenswrapper[4714]: I0129 16:23:56.688993 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r4sv\" (UniqueName: \"kubernetes.io/projected/d990dfb7-e078-4c7e-8e98-40b10f062a04-kube-api-access-8r4sv\") pod \"mariadb-operator-index-l6dkm\" (UID: \"d990dfb7-e078-4c7e-8e98-40b10f062a04\") " pod="openstack-operators/mariadb-operator-index-l6dkm"
Jan 29 16:23:56 crc kubenswrapper[4714]: I0129 16:23:56.790526 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r4sv\" (UniqueName: \"kubernetes.io/projected/d990dfb7-e078-4c7e-8e98-40b10f062a04-kube-api-access-8r4sv\") pod \"mariadb-operator-index-l6dkm\" (UID: \"d990dfb7-e078-4c7e-8e98-40b10f062a04\") " pod="openstack-operators/mariadb-operator-index-l6dkm"
Jan 29 16:23:56 crc kubenswrapper[4714]: I0129 16:23:56.816682 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r4sv\" (UniqueName: \"kubernetes.io/projected/d990dfb7-e078-4c7e-8e98-40b10f062a04-kube-api-access-8r4sv\") pod \"mariadb-operator-index-l6dkm\" (UID: \"d990dfb7-e078-4c7e-8e98-40b10f062a04\") " pod="openstack-operators/mariadb-operator-index-l6dkm"
Jan 29 16:23:56 crc kubenswrapper[4714]: I0129 16:23:56.856694 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-l6dkm"
Jan 29 16:23:56 crc kubenswrapper[4714]: I0129 16:23:56.859437 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-jxfdk" podUID="c73614a5-aed3-4942-9382-2981e22773ec" containerName="registry-server" containerID="cri-o://e13c954e2ceec860ed151e1275323a23d1c2600cd51f3926408e9e8970fbca50" gracePeriod=2
Jan 29 16:23:57 crc kubenswrapper[4714]: I0129 16:23:57.270904 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-l6dkm"]
Jan 29 16:23:57 crc kubenswrapper[4714]: W0129 16:23:57.277240 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd990dfb7_e078_4c7e_8e98_40b10f062a04.slice/crio-cd19708e51ad0ae38749b06c635286e93f2554a3fecdeb70a36c4d4f40376c94 WatchSource:0}: Error finding container cd19708e51ad0ae38749b06c635286e93f2554a3fecdeb70a36c4d4f40376c94: Status 404 returned error can't find the container with id cd19708e51ad0ae38749b06c635286e93f2554a3fecdeb70a36c4d4f40376c94
Jan 29 16:23:57 crc kubenswrapper[4714]: I0129 16:23:57.844693 4714 patch_prober.go:28] interesting pod/machine-config-daemon-ppngk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:23:57 crc kubenswrapper[4714]: I0129 16:23:57.844763 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:23:57 crc kubenswrapper[4714]: I0129 16:23:57.844829 4714 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ppngk"
Jan 29 16:23:57 crc kubenswrapper[4714]: I0129 16:23:57.845434 4714 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"434181b332ad91829c9ca3b07c475cac7d3c8b013492e90ce07fd88776d24efa"} pod="openshift-machine-config-operator/machine-config-daemon-ppngk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 16:23:57 crc kubenswrapper[4714]: I0129 16:23:57.845510 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" containerID="cri-o://434181b332ad91829c9ca3b07c475cac7d3c8b013492e90ce07fd88776d24efa" gracePeriod=600
Jan 29 16:23:57 crc kubenswrapper[4714]: I0129 16:23:57.866564 4714 generic.go:334] "Generic (PLEG): container finished" podID="c73614a5-aed3-4942-9382-2981e22773ec" containerID="e13c954e2ceec860ed151e1275323a23d1c2600cd51f3926408e9e8970fbca50" exitCode=0
Jan 29 16:23:57 crc kubenswrapper[4714]: I0129 16:23:57.866656 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-jxfdk" event={"ID":"c73614a5-aed3-4942-9382-2981e22773ec","Type":"ContainerDied","Data":"e13c954e2ceec860ed151e1275323a23d1c2600cd51f3926408e9e8970fbca50"}
Jan 29 16:23:57 crc kubenswrapper[4714]: I0129 16:23:57.868457 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-l6dkm" event={"ID":"d990dfb7-e078-4c7e-8e98-40b10f062a04","Type":"ContainerStarted","Data":"cd19708e51ad0ae38749b06c635286e93f2554a3fecdeb70a36c4d4f40376c94"}
Jan 29 16:23:58 crc kubenswrapper[4714]: I0129 16:23:58.880120 4714 generic.go:334] "Generic (PLEG): container finished" podID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerID="434181b332ad91829c9ca3b07c475cac7d3c8b013492e90ce07fd88776d24efa" exitCode=0
Jan 29 16:23:58 crc kubenswrapper[4714]: I0129 16:23:58.880173 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" event={"ID":"c8c765f3-89eb-4077-8829-03e86eb0c90c","Type":"ContainerDied","Data":"434181b332ad91829c9ca3b07c475cac7d3c8b013492e90ce07fd88776d24efa"}
Jan 29 16:23:58 crc kubenswrapper[4714]: I0129 16:23:58.880212 4714 scope.go:117] "RemoveContainer" containerID="aeda778ca6de188bfb9f09408c5d355e6f8d4366d5f9ebe7bfd9f2e4dea2a0e4"
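
The liveness handling above is the complete failure-to-restart path: patch_prober records the raw HTTP error, prober.go marks the probe failed, and once the failure threshold is crossed the kubelet logs "failed liveness probe, will be restarted" and kills the container with its termination grace period (600s here). The probe itself is an ordinary HTTP GET; a stand-in sketch (not the kubelet prober; port and path taken from the log):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: time.Second}
    	resp, err := client.Get("http://127.0.0.1:8798/health")
    	if err != nil {
    		// Matches the failures above, e.g. "connect: connection refused".
    		fmt.Println("probe failure:", err)
    		return
    	}
    	defer resp.Body.Close()
    	// Any 2xx/3xx status counts as success for an HTTP liveness probe.
    	fmt.Println("probe status:", resp.Status)
    }
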
Jan 29 16:23:59 crc kubenswrapper[4714]: I0129 16:23:59.614372 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-jxfdk"
Jan 29 16:23:59 crc kubenswrapper[4714]: I0129 16:23:59.728437 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtkk7\" (UniqueName: \"kubernetes.io/projected/c73614a5-aed3-4942-9382-2981e22773ec-kube-api-access-wtkk7\") pod \"c73614a5-aed3-4942-9382-2981e22773ec\" (UID: \"c73614a5-aed3-4942-9382-2981e22773ec\") "
Jan 29 16:23:59 crc kubenswrapper[4714]: I0129 16:23:59.733684 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c73614a5-aed3-4942-9382-2981e22773ec-kube-api-access-wtkk7" (OuterVolumeSpecName: "kube-api-access-wtkk7") pod "c73614a5-aed3-4942-9382-2981e22773ec" (UID: "c73614a5-aed3-4942-9382-2981e22773ec"). InnerVolumeSpecName "kube-api-access-wtkk7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:23:59 crc kubenswrapper[4714]: I0129 16:23:59.830204 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtkk7\" (UniqueName: \"kubernetes.io/projected/c73614a5-aed3-4942-9382-2981e22773ec-kube-api-access-wtkk7\") on node \"crc\" DevicePath \"\""
Jan 29 16:23:59 crc kubenswrapper[4714]: I0129 16:23:59.888114 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-jxfdk" event={"ID":"c73614a5-aed3-4942-9382-2981e22773ec","Type":"ContainerDied","Data":"590a2584ed40d70665967956dedd365e20ebaf918ed6a882e280c2a862bb3df9"}
Jan 29 16:23:59 crc kubenswrapper[4714]: I0129 16:23:59.888162 4714 scope.go:117] "RemoveContainer" containerID="e13c954e2ceec860ed151e1275323a23d1c2600cd51f3926408e9e8970fbca50"
Jan 29 16:23:59 crc kubenswrapper[4714]: I0129 16:23:59.888171 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-jxfdk"
Jan 29 16:23:59 crc kubenswrapper[4714]: I0129 16:23:59.893294 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" event={"ID":"c8c765f3-89eb-4077-8829-03e86eb0c90c","Type":"ContainerStarted","Data":"77045db0ac9dbee23fe648e58207222e15e50d5178fcc5cc7a606b4bbe2af7ec"}
Jan 29 16:23:59 crc kubenswrapper[4714]: I0129 16:23:59.897614 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-l6dkm" event={"ID":"d990dfb7-e078-4c7e-8e98-40b10f062a04","Type":"ContainerStarted","Data":"0020667ef371fcb5a3d00febffc3770f6cb20130544e17735a3ccff225db36b3"}
Jan 29 16:23:59 crc kubenswrapper[4714]: I0129 16:23:59.932652 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-l6dkm" podStartSLOduration=1.621026461 podStartE2EDuration="3.932630194s" podCreationTimestamp="2026-01-29 16:23:56 +0000 UTC" firstStartedPulling="2026-01-29 16:23:57.28068193 +0000 UTC m=+843.801183050" lastFinishedPulling="2026-01-29 16:23:59.592285663 +0000 UTC m=+846.112786783" observedRunningTime="2026-01-29 16:23:59.924365537 +0000 UTC m=+846.444866677" watchObservedRunningTime="2026-01-29 16:23:59.932630194 +0000 UTC m=+846.453131314"
Jan 29 16:23:59 crc kubenswrapper[4714]: I0129 16:23:59.937960 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-jxfdk"]
Jan 29 16:23:59 crc kubenswrapper[4714]: I0129 16:23:59.941080 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-jxfdk"]
Jan 29 16:24:00 crc kubenswrapper[4714]: I0129 16:24:00.190952 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c73614a5-aed3-4942-9382-2981e22773ec" path="/var/lib/kubelet/pods/c73614a5-aed3-4942-9382-2981e22773ec/volumes"
Jan 29 16:24:06 crc kubenswrapper[4714]: I0129 16:24:06.857311 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-index-l6dkm"
Jan 29 16:24:06 crc kubenswrapper[4714]: I0129 16:24:06.857753 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/mariadb-operator-index-l6dkm"
Jan 29 16:24:06 crc kubenswrapper[4714]: I0129 16:24:06.886819 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/mariadb-operator-index-l6dkm"
Jan 29 16:24:06 crc kubenswrapper[4714]: I0129 16:24:06.976573 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-index-l6dkm"
Jan 29 16:24:08 crc kubenswrapper[4714]: I0129 16:24:08.553278 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp"]
Jan 29 16:24:08 crc kubenswrapper[4714]: E0129 16:24:08.556615 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c73614a5-aed3-4942-9382-2981e22773ec" containerName="registry-server"
Jan 29 16:24:08 crc kubenswrapper[4714]: I0129 16:24:08.556711 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="c73614a5-aed3-4942-9382-2981e22773ec" containerName="registry-server"
Jan 29 16:24:08 crc kubenswrapper[4714]: I0129 16:24:08.557141 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="c73614a5-aed3-4942-9382-2981e22773ec" containerName="registry-server"
Jan 29 16:24:08 crc kubenswrapper[4714]: I0129 16:24:08.558509 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp"
Jan 29 16:24:08 crc kubenswrapper[4714]: I0129 16:24:08.560850 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp"]
Jan 29 16:24:08 crc kubenswrapper[4714]: I0129 16:24:08.561652 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hwqbr"
Jan 29 16:24:08 crc kubenswrapper[4714]: I0129 16:24:08.663567 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cm9m\" (UniqueName: \"kubernetes.io/projected/949d7185-7f54-44dd-9da9-3ed2c3c80e31-kube-api-access-4cm9m\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp\" (UID: \"949d7185-7f54-44dd-9da9-3ed2c3c80e31\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp"
Jan 29 16:24:08 crc kubenswrapper[4714]: I0129 16:24:08.663678 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/949d7185-7f54-44dd-9da9-3ed2c3c80e31-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp\" (UID: \"949d7185-7f54-44dd-9da9-3ed2c3c80e31\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp"
Jan 29 16:24:08 crc kubenswrapper[4714]: I0129 16:24:08.663745 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/949d7185-7f54-44dd-9da9-3ed2c3c80e31-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp\" (UID: \"949d7185-7f54-44dd-9da9-3ed2c3c80e31\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp"
Jan 29 16:24:08 crc kubenswrapper[4714]: I0129 16:24:08.764957 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/949d7185-7f54-44dd-9da9-3ed2c3c80e31-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp\" (UID: \"949d7185-7f54-44dd-9da9-3ed2c3c80e31\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp"
Jan 29 16:24:08 crc kubenswrapper[4714]: I0129 16:24:08.765012 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cm9m\" (UniqueName: \"kubernetes.io/projected/949d7185-7f54-44dd-9da9-3ed2c3c80e31-kube-api-access-4cm9m\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp\" (UID: \"949d7185-7f54-44dd-9da9-3ed2c3c80e31\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp"
Jan 29 16:24:08 crc kubenswrapper[4714]: I0129 16:24:08.765060 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/949d7185-7f54-44dd-9da9-3ed2c3c80e31-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp\" (UID: \"949d7185-7f54-44dd-9da9-3ed2c3c80e31\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp"
Jan 29 16:24:08 crc kubenswrapper[4714]: I0129 16:24:08.765429 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/949d7185-7f54-44dd-9da9-3ed2c3c80e31-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp\" (UID: \"949d7185-7f54-44dd-9da9-3ed2c3c80e31\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp"
Jan 29 16:24:08 crc kubenswrapper[4714]: I0129 16:24:08.765723 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/949d7185-7f54-44dd-9da9-3ed2c3c80e31-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp\" (UID: \"949d7185-7f54-44dd-9da9-3ed2c3c80e31\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp"
Jan 29 16:24:08 crc kubenswrapper[4714]: I0129 16:24:08.795067 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cm9m\" (UniqueName: \"kubernetes.io/projected/949d7185-7f54-44dd-9da9-3ed2c3c80e31-kube-api-access-4cm9m\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp\" (UID: \"949d7185-7f54-44dd-9da9-3ed2c3c80e31\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp"
Jan 29 16:24:08 crc kubenswrapper[4714]: I0129 16:24:08.935876 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp"
Jan 29 16:24:09 crc kubenswrapper[4714]: I0129 16:24:09.363135 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp"]
Jan 29 16:24:09 crc kubenswrapper[4714]: I0129 16:24:09.969044 4714 generic.go:334] "Generic (PLEG): container finished" podID="949d7185-7f54-44dd-9da9-3ed2c3c80e31" containerID="bfc5719dfa5a492d30c2a6be943eb0655e0af9b3224bd4745d65e5929dc3407a" exitCode=0
Jan 29 16:24:09 crc kubenswrapper[4714]: I0129 16:24:09.969086 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp" event={"ID":"949d7185-7f54-44dd-9da9-3ed2c3c80e31","Type":"ContainerDied","Data":"bfc5719dfa5a492d30c2a6be943eb0655e0af9b3224bd4745d65e5929dc3407a"}
Jan 29 16:24:09 crc kubenswrapper[4714]: I0129 16:24:09.969111 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp" event={"ID":"949d7185-7f54-44dd-9da9-3ed2c3c80e31","Type":"ContainerStarted","Data":"62108aeab22c92a3ca54960feac1e2ad547900c8771f3b5ba2b6aee8e4a745b6"}
Jan 29 16:24:10 crc kubenswrapper[4714]: I0129 16:24:10.977790 4714 generic.go:334] "Generic (PLEG): container finished" podID="949d7185-7f54-44dd-9da9-3ed2c3c80e31" containerID="6ee4a0f3d055cfa18b2c55afd177524902b4dc64f544a61af9b1f46505e17336" exitCode=0
Jan 29 16:24:10 crc kubenswrapper[4714]: I0129 16:24:10.977917 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp" event={"ID":"949d7185-7f54-44dd-9da9-3ed2c3c80e31","Type":"ContainerDied","Data":"6ee4a0f3d055cfa18b2c55afd177524902b4dc64f544a61af9b1f46505e17336"}
Jan 29 16:24:11 crc kubenswrapper[4714]: I0129 16:24:11.987420 4714 generic.go:334] "Generic (PLEG): container finished" podID="949d7185-7f54-44dd-9da9-3ed2c3c80e31" containerID="a2afc6d59d0b69e82adfd2f0ef885392ef80db9eed2c4295483123972e972c1a" exitCode=0
kubenswrapper[4714]: I0129 16:24:11.987475 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp" event={"ID":"949d7185-7f54-44dd-9da9-3ed2c3c80e31","Type":"ContainerDied","Data":"a2afc6d59d0b69e82adfd2f0ef885392ef80db9eed2c4295483123972e972c1a"} Jan 29 16:24:13 crc kubenswrapper[4714]: I0129 16:24:13.284159 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp" Jan 29 16:24:13 crc kubenswrapper[4714]: I0129 16:24:13.427384 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/949d7185-7f54-44dd-9da9-3ed2c3c80e31-bundle\") pod \"949d7185-7f54-44dd-9da9-3ed2c3c80e31\" (UID: \"949d7185-7f54-44dd-9da9-3ed2c3c80e31\") " Jan 29 16:24:13 crc kubenswrapper[4714]: I0129 16:24:13.427556 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/949d7185-7f54-44dd-9da9-3ed2c3c80e31-util\") pod \"949d7185-7f54-44dd-9da9-3ed2c3c80e31\" (UID: \"949d7185-7f54-44dd-9da9-3ed2c3c80e31\") " Jan 29 16:24:13 crc kubenswrapper[4714]: I0129 16:24:13.427649 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cm9m\" (UniqueName: \"kubernetes.io/projected/949d7185-7f54-44dd-9da9-3ed2c3c80e31-kube-api-access-4cm9m\") pod \"949d7185-7f54-44dd-9da9-3ed2c3c80e31\" (UID: \"949d7185-7f54-44dd-9da9-3ed2c3c80e31\") " Jan 29 16:24:13 crc kubenswrapper[4714]: I0129 16:24:13.429327 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/949d7185-7f54-44dd-9da9-3ed2c3c80e31-bundle" (OuterVolumeSpecName: "bundle") pod "949d7185-7f54-44dd-9da9-3ed2c3c80e31" (UID: "949d7185-7f54-44dd-9da9-3ed2c3c80e31"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:24:13 crc kubenswrapper[4714]: I0129 16:24:13.433164 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/949d7185-7f54-44dd-9da9-3ed2c3c80e31-kube-api-access-4cm9m" (OuterVolumeSpecName: "kube-api-access-4cm9m") pod "949d7185-7f54-44dd-9da9-3ed2c3c80e31" (UID: "949d7185-7f54-44dd-9da9-3ed2c3c80e31"). InnerVolumeSpecName "kube-api-access-4cm9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:24:13 crc kubenswrapper[4714]: I0129 16:24:13.441899 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/949d7185-7f54-44dd-9da9-3ed2c3c80e31-util" (OuterVolumeSpecName: "util") pod "949d7185-7f54-44dd-9da9-3ed2c3c80e31" (UID: "949d7185-7f54-44dd-9da9-3ed2c3c80e31"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:24:13 crc kubenswrapper[4714]: I0129 16:24:13.529061 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cm9m\" (UniqueName: \"kubernetes.io/projected/949d7185-7f54-44dd-9da9-3ed2c3c80e31-kube-api-access-4cm9m\") on node \"crc\" DevicePath \"\"" Jan 29 16:24:13 crc kubenswrapper[4714]: I0129 16:24:13.529093 4714 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/949d7185-7f54-44dd-9da9-3ed2c3c80e31-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:24:13 crc kubenswrapper[4714]: I0129 16:24:13.529101 4714 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/949d7185-7f54-44dd-9da9-3ed2c3c80e31-util\") on node \"crc\" DevicePath \"\"" Jan 29 16:24:14 crc kubenswrapper[4714]: I0129 16:24:14.006333 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp" event={"ID":"949d7185-7f54-44dd-9da9-3ed2c3c80e31","Type":"ContainerDied","Data":"62108aeab22c92a3ca54960feac1e2ad547900c8771f3b5ba2b6aee8e4a745b6"} Jan 29 16:24:14 crc kubenswrapper[4714]: I0129 16:24:14.006788 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62108aeab22c92a3ca54960feac1e2ad547900c8771f3b5ba2b6aee8e4a745b6" Jan 29 16:24:14 crc kubenswrapper[4714]: I0129 16:24:14.006383 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp" Jan 29 16:24:21 crc kubenswrapper[4714]: I0129 16:24:21.689275 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn"] Jan 29 16:24:21 crc kubenswrapper[4714]: E0129 16:24:21.689880 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949d7185-7f54-44dd-9da9-3ed2c3c80e31" containerName="util" Jan 29 16:24:21 crc kubenswrapper[4714]: I0129 16:24:21.689896 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="949d7185-7f54-44dd-9da9-3ed2c3c80e31" containerName="util" Jan 29 16:24:21 crc kubenswrapper[4714]: E0129 16:24:21.689908 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949d7185-7f54-44dd-9da9-3ed2c3c80e31" containerName="extract" Jan 29 16:24:21 crc kubenswrapper[4714]: I0129 16:24:21.689915 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="949d7185-7f54-44dd-9da9-3ed2c3c80e31" containerName="extract" Jan 29 16:24:21 crc kubenswrapper[4714]: E0129 16:24:21.689951 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949d7185-7f54-44dd-9da9-3ed2c3c80e31" containerName="pull" Jan 29 16:24:21 crc kubenswrapper[4714]: I0129 16:24:21.689961 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="949d7185-7f54-44dd-9da9-3ed2c3c80e31" containerName="pull" Jan 29 16:24:21 crc kubenswrapper[4714]: I0129 16:24:21.690087 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="949d7185-7f54-44dd-9da9-3ed2c3c80e31" containerName="extract" Jan 29 16:24:21 crc kubenswrapper[4714]: I0129 16:24:21.690544 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn" Jan 29 16:24:21 crc kubenswrapper[4714]: I0129 16:24:21.693092 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-c568f" Jan 29 16:24:21 crc kubenswrapper[4714]: I0129 16:24:21.693098 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 29 16:24:21 crc kubenswrapper[4714]: I0129 16:24:21.698098 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-service-cert" Jan 29 16:24:21 crc kubenswrapper[4714]: I0129 16:24:21.707445 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn"] Jan 29 16:24:21 crc kubenswrapper[4714]: I0129 16:24:21.842292 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8771e447-1cf7-43f9-bfab-6c1afd7476dc-webhook-cert\") pod \"mariadb-operator-controller-manager-7cc56799bb-ddchn\" (UID: \"8771e447-1cf7-43f9-bfab-6c1afd7476dc\") " pod="openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn" Jan 29 16:24:21 crc kubenswrapper[4714]: I0129 16:24:21.842853 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx2m7\" (UniqueName: \"kubernetes.io/projected/8771e447-1cf7-43f9-bfab-6c1afd7476dc-kube-api-access-mx2m7\") pod \"mariadb-operator-controller-manager-7cc56799bb-ddchn\" (UID: \"8771e447-1cf7-43f9-bfab-6c1afd7476dc\") " pod="openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn" Jan 29 16:24:21 crc kubenswrapper[4714]: I0129 16:24:21.842926 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8771e447-1cf7-43f9-bfab-6c1afd7476dc-apiservice-cert\") pod \"mariadb-operator-controller-manager-7cc56799bb-ddchn\" (UID: \"8771e447-1cf7-43f9-bfab-6c1afd7476dc\") " pod="openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn" Jan 29 16:24:21 crc kubenswrapper[4714]: I0129 16:24:21.944064 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8771e447-1cf7-43f9-bfab-6c1afd7476dc-webhook-cert\") pod \"mariadb-operator-controller-manager-7cc56799bb-ddchn\" (UID: \"8771e447-1cf7-43f9-bfab-6c1afd7476dc\") " pod="openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn" Jan 29 16:24:21 crc kubenswrapper[4714]: I0129 16:24:21.944518 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx2m7\" (UniqueName: \"kubernetes.io/projected/8771e447-1cf7-43f9-bfab-6c1afd7476dc-kube-api-access-mx2m7\") pod \"mariadb-operator-controller-manager-7cc56799bb-ddchn\" (UID: \"8771e447-1cf7-43f9-bfab-6c1afd7476dc\") " pod="openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn" Jan 29 16:24:21 crc kubenswrapper[4714]: I0129 16:24:21.944746 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8771e447-1cf7-43f9-bfab-6c1afd7476dc-apiservice-cert\") pod \"mariadb-operator-controller-manager-7cc56799bb-ddchn\" (UID: \"8771e447-1cf7-43f9-bfab-6c1afd7476dc\") 
" pod="openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn" Jan 29 16:24:21 crc kubenswrapper[4714]: I0129 16:24:21.953193 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8771e447-1cf7-43f9-bfab-6c1afd7476dc-apiservice-cert\") pod \"mariadb-operator-controller-manager-7cc56799bb-ddchn\" (UID: \"8771e447-1cf7-43f9-bfab-6c1afd7476dc\") " pod="openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn" Jan 29 16:24:21 crc kubenswrapper[4714]: I0129 16:24:21.955462 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8771e447-1cf7-43f9-bfab-6c1afd7476dc-webhook-cert\") pod \"mariadb-operator-controller-manager-7cc56799bb-ddchn\" (UID: \"8771e447-1cf7-43f9-bfab-6c1afd7476dc\") " pod="openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn" Jan 29 16:24:21 crc kubenswrapper[4714]: I0129 16:24:21.962333 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx2m7\" (UniqueName: \"kubernetes.io/projected/8771e447-1cf7-43f9-bfab-6c1afd7476dc-kube-api-access-mx2m7\") pod \"mariadb-operator-controller-manager-7cc56799bb-ddchn\" (UID: \"8771e447-1cf7-43f9-bfab-6c1afd7476dc\") " pod="openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn" Jan 29 16:24:22 crc kubenswrapper[4714]: I0129 16:24:22.009418 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn" Jan 29 16:24:22 crc kubenswrapper[4714]: I0129 16:24:22.275812 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn"] Jan 29 16:24:22 crc kubenswrapper[4714]: W0129 16:24:22.281932 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8771e447_1cf7_43f9_bfab_6c1afd7476dc.slice/crio-4ead9770f8d819bd3b2a9d514caf8dd18e92463bd9f126ba95d1cbb0e58fb71f WatchSource:0}: Error finding container 4ead9770f8d819bd3b2a9d514caf8dd18e92463bd9f126ba95d1cbb0e58fb71f: Status 404 returned error can't find the container with id 4ead9770f8d819bd3b2a9d514caf8dd18e92463bd9f126ba95d1cbb0e58fb71f Jan 29 16:24:23 crc kubenswrapper[4714]: I0129 16:24:23.062374 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn" event={"ID":"8771e447-1cf7-43f9-bfab-6c1afd7476dc","Type":"ContainerStarted","Data":"4ead9770f8d819bd3b2a9d514caf8dd18e92463bd9f126ba95d1cbb0e58fb71f"} Jan 29 16:24:26 crc kubenswrapper[4714]: I0129 16:24:26.084107 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn" event={"ID":"8771e447-1cf7-43f9-bfab-6c1afd7476dc","Type":"ContainerStarted","Data":"265cc03bd32fdd618d4ab75713d4df2f554d8e322232952f23e67ae7895f0208"} Jan 29 16:24:26 crc kubenswrapper[4714]: I0129 16:24:26.084696 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn" Jan 29 16:24:26 crc kubenswrapper[4714]: I0129 16:24:26.102950 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn" podStartSLOduration=1.861245273 podStartE2EDuration="5.102921372s" 
podCreationTimestamp="2026-01-29 16:24:21 +0000 UTC" firstStartedPulling="2026-01-29 16:24:22.284660459 +0000 UTC m=+868.805161589" lastFinishedPulling="2026-01-29 16:24:25.526336578 +0000 UTC m=+872.046837688" observedRunningTime="2026-01-29 16:24:26.101012202 +0000 UTC m=+872.621513332" watchObservedRunningTime="2026-01-29 16:24:26.102921372 +0000 UTC m=+872.623422492" Jan 29 16:24:32 crc kubenswrapper[4714]: I0129 16:24:32.015388 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn" Jan 29 16:24:38 crc kubenswrapper[4714]: I0129 16:24:38.449955 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-44nbs"] Jan 29 16:24:38 crc kubenswrapper[4714]: I0129 16:24:38.451402 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-44nbs" Jan 29 16:24:38 crc kubenswrapper[4714]: I0129 16:24:38.454444 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-index-dockercfg-5fnhf" Jan 29 16:24:38 crc kubenswrapper[4714]: I0129 16:24:38.457989 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-44nbs"] Jan 29 16:24:38 crc kubenswrapper[4714]: I0129 16:24:38.486854 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhzpl\" (UniqueName: \"kubernetes.io/projected/6e7fe80f-e880-4c2d-8f2e-1861fb1575b7-kube-api-access-zhzpl\") pod \"infra-operator-index-44nbs\" (UID: \"6e7fe80f-e880-4c2d-8f2e-1861fb1575b7\") " pod="openstack-operators/infra-operator-index-44nbs" Jan 29 16:24:38 crc kubenswrapper[4714]: I0129 16:24:38.588027 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhzpl\" (UniqueName: \"kubernetes.io/projected/6e7fe80f-e880-4c2d-8f2e-1861fb1575b7-kube-api-access-zhzpl\") pod \"infra-operator-index-44nbs\" (UID: \"6e7fe80f-e880-4c2d-8f2e-1861fb1575b7\") " pod="openstack-operators/infra-operator-index-44nbs" Jan 29 16:24:38 crc kubenswrapper[4714]: I0129 16:24:38.626895 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhzpl\" (UniqueName: \"kubernetes.io/projected/6e7fe80f-e880-4c2d-8f2e-1861fb1575b7-kube-api-access-zhzpl\") pod \"infra-operator-index-44nbs\" (UID: \"6e7fe80f-e880-4c2d-8f2e-1861fb1575b7\") " pod="openstack-operators/infra-operator-index-44nbs" Jan 29 16:24:38 crc kubenswrapper[4714]: I0129 16:24:38.771247 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-44nbs" Jan 29 16:24:40 crc kubenswrapper[4714]: I0129 16:24:40.025258 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-44nbs"] Jan 29 16:24:40 crc kubenswrapper[4714]: I0129 16:24:40.166498 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-44nbs" event={"ID":"6e7fe80f-e880-4c2d-8f2e-1861fb1575b7","Type":"ContainerStarted","Data":"2b032bfed2b84d9c544866a2e2b9f0d7688dc192688bc1e69bbc65eeaf0fbe29"} Jan 29 16:24:42 crc kubenswrapper[4714]: I0129 16:24:42.196611 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-44nbs" event={"ID":"6e7fe80f-e880-4c2d-8f2e-1861fb1575b7","Type":"ContainerStarted","Data":"3da4379d19ac2f9c90609a5bbb56d1adb9a0376fa03d524802535beb518bf498"} Jan 29 16:24:42 crc kubenswrapper[4714]: I0129 16:24:42.214147 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-44nbs" podStartSLOduration=2.906302288 podStartE2EDuration="4.214116376s" podCreationTimestamp="2026-01-29 16:24:38 +0000 UTC" firstStartedPulling="2026-01-29 16:24:40.043260627 +0000 UTC m=+886.563761747" lastFinishedPulling="2026-01-29 16:24:41.351074695 +0000 UTC m=+887.871575835" observedRunningTime="2026-01-29 16:24:42.208181851 +0000 UTC m=+888.728683001" watchObservedRunningTime="2026-01-29 16:24:42.214116376 +0000 UTC m=+888.734617536" Jan 29 16:24:42 crc kubenswrapper[4714]: I0129 16:24:42.442773 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-44nbs"] Jan 29 16:24:43 crc kubenswrapper[4714]: I0129 16:24:43.052650 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-9xq82"] Jan 29 16:24:43 crc kubenswrapper[4714]: I0129 16:24:43.054376 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-9xq82" Jan 29 16:24:43 crc kubenswrapper[4714]: I0129 16:24:43.057858 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-9xq82"] Jan 29 16:24:43 crc kubenswrapper[4714]: I0129 16:24:43.159498 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts4kr\" (UniqueName: \"kubernetes.io/projected/9dcd8561-aa17-46a8-b184-0495c320a33b-kube-api-access-ts4kr\") pod \"infra-operator-index-9xq82\" (UID: \"9dcd8561-aa17-46a8-b184-0495c320a33b\") " pod="openstack-operators/infra-operator-index-9xq82" Jan 29 16:24:43 crc kubenswrapper[4714]: I0129 16:24:43.260873 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts4kr\" (UniqueName: \"kubernetes.io/projected/9dcd8561-aa17-46a8-b184-0495c320a33b-kube-api-access-ts4kr\") pod \"infra-operator-index-9xq82\" (UID: \"9dcd8561-aa17-46a8-b184-0495c320a33b\") " pod="openstack-operators/infra-operator-index-9xq82" Jan 29 16:24:43 crc kubenswrapper[4714]: I0129 16:24:43.283770 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts4kr\" (UniqueName: \"kubernetes.io/projected/9dcd8561-aa17-46a8-b184-0495c320a33b-kube-api-access-ts4kr\") pod \"infra-operator-index-9xq82\" (UID: \"9dcd8561-aa17-46a8-b184-0495c320a33b\") " pod="openstack-operators/infra-operator-index-9xq82" Jan 29 16:24:43 crc kubenswrapper[4714]: I0129 16:24:43.375753 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-9xq82" Jan 29 16:24:43 crc kubenswrapper[4714]: I0129 16:24:43.564804 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-9xq82"] Jan 29 16:24:44 crc kubenswrapper[4714]: I0129 16:24:44.199637 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-9xq82" event={"ID":"9dcd8561-aa17-46a8-b184-0495c320a33b","Type":"ContainerStarted","Data":"17fa7e5da52af363967b26a09672574689ebcabaf095f6c1443af9a04ea81254"} Jan 29 16:24:44 crc kubenswrapper[4714]: I0129 16:24:44.199704 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-9xq82" event={"ID":"9dcd8561-aa17-46a8-b184-0495c320a33b","Type":"ContainerStarted","Data":"b25ca92cca2c102702aabf44744c607b11ac82433ac2ea8b9c60134c6952d1ea"} Jan 29 16:24:44 crc kubenswrapper[4714]: I0129 16:24:44.199758 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-index-44nbs" podUID="6e7fe80f-e880-4c2d-8f2e-1861fb1575b7" containerName="registry-server" containerID="cri-o://3da4379d19ac2f9c90609a5bbb56d1adb9a0376fa03d524802535beb518bf498" gracePeriod=2 Jan 29 16:24:44 crc kubenswrapper[4714]: I0129 16:24:44.229627 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-9xq82" podStartSLOduration=0.786167259 podStartE2EDuration="1.229601073s" podCreationTimestamp="2026-01-29 16:24:43 +0000 UTC" firstStartedPulling="2026-01-29 16:24:43.574020773 +0000 UTC m=+890.094521893" lastFinishedPulling="2026-01-29 16:24:44.017454587 +0000 UTC m=+890.537955707" observedRunningTime="2026-01-29 16:24:44.226907569 +0000 UTC m=+890.747408709" watchObservedRunningTime="2026-01-29 16:24:44.229601073 +0000 UTC m=+890.750102193" Jan 29 16:24:44 crc kubenswrapper[4714]: I0129 
16:24:44.520404 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-44nbs" Jan 29 16:24:44 crc kubenswrapper[4714]: I0129 16:24:44.592323 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhzpl\" (UniqueName: \"kubernetes.io/projected/6e7fe80f-e880-4c2d-8f2e-1861fb1575b7-kube-api-access-zhzpl\") pod \"6e7fe80f-e880-4c2d-8f2e-1861fb1575b7\" (UID: \"6e7fe80f-e880-4c2d-8f2e-1861fb1575b7\") " Jan 29 16:24:44 crc kubenswrapper[4714]: I0129 16:24:44.599237 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e7fe80f-e880-4c2d-8f2e-1861fb1575b7-kube-api-access-zhzpl" (OuterVolumeSpecName: "kube-api-access-zhzpl") pod "6e7fe80f-e880-4c2d-8f2e-1861fb1575b7" (UID: "6e7fe80f-e880-4c2d-8f2e-1861fb1575b7"). InnerVolumeSpecName "kube-api-access-zhzpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:24:44 crc kubenswrapper[4714]: I0129 16:24:44.693702 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhzpl\" (UniqueName: \"kubernetes.io/projected/6e7fe80f-e880-4c2d-8f2e-1861fb1575b7-kube-api-access-zhzpl\") on node \"crc\" DevicePath \"\"" Jan 29 16:24:45 crc kubenswrapper[4714]: I0129 16:24:45.207723 4714 generic.go:334] "Generic (PLEG): container finished" podID="6e7fe80f-e880-4c2d-8f2e-1861fb1575b7" containerID="3da4379d19ac2f9c90609a5bbb56d1adb9a0376fa03d524802535beb518bf498" exitCode=0 Jan 29 16:24:45 crc kubenswrapper[4714]: I0129 16:24:45.207791 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-44nbs" Jan 29 16:24:45 crc kubenswrapper[4714]: I0129 16:24:45.207794 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-44nbs" event={"ID":"6e7fe80f-e880-4c2d-8f2e-1861fb1575b7","Type":"ContainerDied","Data":"3da4379d19ac2f9c90609a5bbb56d1adb9a0376fa03d524802535beb518bf498"} Jan 29 16:24:45 crc kubenswrapper[4714]: I0129 16:24:45.207890 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-44nbs" event={"ID":"6e7fe80f-e880-4c2d-8f2e-1861fb1575b7","Type":"ContainerDied","Data":"2b032bfed2b84d9c544866a2e2b9f0d7688dc192688bc1e69bbc65eeaf0fbe29"} Jan 29 16:24:45 crc kubenswrapper[4714]: I0129 16:24:45.207965 4714 scope.go:117] "RemoveContainer" containerID="3da4379d19ac2f9c90609a5bbb56d1adb9a0376fa03d524802535beb518bf498" Jan 29 16:24:45 crc kubenswrapper[4714]: I0129 16:24:45.232741 4714 scope.go:117] "RemoveContainer" containerID="3da4379d19ac2f9c90609a5bbb56d1adb9a0376fa03d524802535beb518bf498" Jan 29 16:24:45 crc kubenswrapper[4714]: E0129 16:24:45.233954 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3da4379d19ac2f9c90609a5bbb56d1adb9a0376fa03d524802535beb518bf498\": container with ID starting with 3da4379d19ac2f9c90609a5bbb56d1adb9a0376fa03d524802535beb518bf498 not found: ID does not exist" containerID="3da4379d19ac2f9c90609a5bbb56d1adb9a0376fa03d524802535beb518bf498" Jan 29 16:24:45 crc kubenswrapper[4714]: I0129 16:24:45.234000 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3da4379d19ac2f9c90609a5bbb56d1adb9a0376fa03d524802535beb518bf498"} err="failed to get container status \"3da4379d19ac2f9c90609a5bbb56d1adb9a0376fa03d524802535beb518bf498\": rpc error: code = NotFound desc = 
could not find container \"3da4379d19ac2f9c90609a5bbb56d1adb9a0376fa03d524802535beb518bf498\": container with ID starting with 3da4379d19ac2f9c90609a5bbb56d1adb9a0376fa03d524802535beb518bf498 not found: ID does not exist" Jan 29 16:24:45 crc kubenswrapper[4714]: I0129 16:24:45.256544 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-44nbs"] Jan 29 16:24:45 crc kubenswrapper[4714]: I0129 16:24:45.261639 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-index-44nbs"] Jan 29 16:24:46 crc kubenswrapper[4714]: I0129 16:24:46.194505 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e7fe80f-e880-4c2d-8f2e-1861fb1575b7" path="/var/lib/kubelet/pods/6e7fe80f-e880-4c2d-8f2e-1861fb1575b7/volumes" Jan 29 16:24:53 crc kubenswrapper[4714]: I0129 16:24:53.376725 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-index-9xq82" Jan 29 16:24:53 crc kubenswrapper[4714]: I0129 16:24:53.377604 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/infra-operator-index-9xq82" Jan 29 16:24:53 crc kubenswrapper[4714]: I0129 16:24:53.411626 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/infra-operator-index-9xq82" Jan 29 16:24:54 crc kubenswrapper[4714]: I0129 16:24:54.299068 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-index-9xq82" Jan 29 16:24:56 crc kubenswrapper[4714]: I0129 16:24:56.301356 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs"] Jan 29 16:24:56 crc kubenswrapper[4714]: E0129 16:24:56.301996 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7fe80f-e880-4c2d-8f2e-1861fb1575b7" containerName="registry-server" Jan 29 16:24:56 crc kubenswrapper[4714]: I0129 16:24:56.302012 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7fe80f-e880-4c2d-8f2e-1861fb1575b7" containerName="registry-server" Jan 29 16:24:56 crc kubenswrapper[4714]: I0129 16:24:56.302139 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e7fe80f-e880-4c2d-8f2e-1861fb1575b7" containerName="registry-server" Jan 29 16:24:56 crc kubenswrapper[4714]: I0129 16:24:56.303042 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs" Jan 29 16:24:56 crc kubenswrapper[4714]: I0129 16:24:56.306045 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hwqbr" Jan 29 16:24:56 crc kubenswrapper[4714]: I0129 16:24:56.309329 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs"] Jan 29 16:24:56 crc kubenswrapper[4714]: I0129 16:24:56.452870 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/096bd691-cca6-4566-b56c-7643e2feaef1-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs\" (UID: \"096bd691-cca6-4566-b56c-7643e2feaef1\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs" Jan 29 16:24:56 crc kubenswrapper[4714]: I0129 16:24:56.453141 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pplc7\" (UniqueName: \"kubernetes.io/projected/096bd691-cca6-4566-b56c-7643e2feaef1-kube-api-access-pplc7\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs\" (UID: \"096bd691-cca6-4566-b56c-7643e2feaef1\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs" Jan 29 16:24:56 crc kubenswrapper[4714]: I0129 16:24:56.453222 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/096bd691-cca6-4566-b56c-7643e2feaef1-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs\" (UID: \"096bd691-cca6-4566-b56c-7643e2feaef1\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs" Jan 29 16:24:56 crc kubenswrapper[4714]: I0129 16:24:56.554619 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/096bd691-cca6-4566-b56c-7643e2feaef1-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs\" (UID: \"096bd691-cca6-4566-b56c-7643e2feaef1\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs" Jan 29 16:24:56 crc kubenswrapper[4714]: I0129 16:24:56.554810 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pplc7\" (UniqueName: \"kubernetes.io/projected/096bd691-cca6-4566-b56c-7643e2feaef1-kube-api-access-pplc7\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs\" (UID: \"096bd691-cca6-4566-b56c-7643e2feaef1\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs" Jan 29 16:24:56 crc kubenswrapper[4714]: I0129 16:24:56.554880 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/096bd691-cca6-4566-b56c-7643e2feaef1-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs\" (UID: \"096bd691-cca6-4566-b56c-7643e2feaef1\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs" Jan 29 16:24:56 crc kubenswrapper[4714]: I0129 16:24:56.555133 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/096bd691-cca6-4566-b56c-7643e2feaef1-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs\" (UID: \"096bd691-cca6-4566-b56c-7643e2feaef1\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs" Jan 29 16:24:56 crc kubenswrapper[4714]: I0129 16:24:56.555161 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/096bd691-cca6-4566-b56c-7643e2feaef1-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs\" (UID: \"096bd691-cca6-4566-b56c-7643e2feaef1\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs" Jan 29 16:24:56 crc kubenswrapper[4714]: I0129 16:24:56.573981 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pplc7\" (UniqueName: \"kubernetes.io/projected/096bd691-cca6-4566-b56c-7643e2feaef1-kube-api-access-pplc7\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs\" (UID: \"096bd691-cca6-4566-b56c-7643e2feaef1\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs" Jan 29 16:24:56 crc kubenswrapper[4714]: I0129 16:24:56.621778 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs" Jan 29 16:24:57 crc kubenswrapper[4714]: I0129 16:24:57.054783 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs"] Jan 29 16:24:57 crc kubenswrapper[4714]: I0129 16:24:57.283081 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs" event={"ID":"096bd691-cca6-4566-b56c-7643e2feaef1","Type":"ContainerStarted","Data":"1cd564e322fa134ca0b1e14f7b1c05c0de31a1e7c8f443cd6b64bbf340b9a6ae"} Jan 29 16:24:57 crc kubenswrapper[4714]: I0129 16:24:57.283130 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs" event={"ID":"096bd691-cca6-4566-b56c-7643e2feaef1","Type":"ContainerStarted","Data":"9498d20df0cd78103e7153f16a6de5efe6b72b7790741a99a88e73ae8cbf811d"} Jan 29 16:24:58 crc kubenswrapper[4714]: I0129 16:24:58.291654 4714 generic.go:334] "Generic (PLEG): container finished" podID="096bd691-cca6-4566-b56c-7643e2feaef1" containerID="1cd564e322fa134ca0b1e14f7b1c05c0de31a1e7c8f443cd6b64bbf340b9a6ae" exitCode=0 Jan 29 16:24:58 crc kubenswrapper[4714]: I0129 16:24:58.291711 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs" event={"ID":"096bd691-cca6-4566-b56c-7643e2feaef1","Type":"ContainerDied","Data":"1cd564e322fa134ca0b1e14f7b1c05c0de31a1e7c8f443cd6b64bbf340b9a6ae"} Jan 29 16:25:00 crc kubenswrapper[4714]: I0129 16:25:00.312596 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs" event={"ID":"096bd691-cca6-4566-b56c-7643e2feaef1","Type":"ContainerStarted","Data":"8be88780f2ccaa67529f7c97f45a315a79167378ebf7e1fdcde84818ec246373"} Jan 29 16:25:01 crc kubenswrapper[4714]: I0129 16:25:01.322047 4714 generic.go:334] "Generic (PLEG): container finished" podID="096bd691-cca6-4566-b56c-7643e2feaef1" 
containerID="8be88780f2ccaa67529f7c97f45a315a79167378ebf7e1fdcde84818ec246373" exitCode=0 Jan 29 16:25:01 crc kubenswrapper[4714]: I0129 16:25:01.322099 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs" event={"ID":"096bd691-cca6-4566-b56c-7643e2feaef1","Type":"ContainerDied","Data":"8be88780f2ccaa67529f7c97f45a315a79167378ebf7e1fdcde84818ec246373"} Jan 29 16:25:02 crc kubenswrapper[4714]: I0129 16:25:02.333544 4714 generic.go:334] "Generic (PLEG): container finished" podID="096bd691-cca6-4566-b56c-7643e2feaef1" containerID="3bc2a2089f6be296701e5d57c79cbb2a9a2dd560e4db7f4ea6460bad3386ed41" exitCode=0 Jan 29 16:25:02 crc kubenswrapper[4714]: I0129 16:25:02.333666 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs" event={"ID":"096bd691-cca6-4566-b56c-7643e2feaef1","Type":"ContainerDied","Data":"3bc2a2089f6be296701e5d57c79cbb2a9a2dd560e4db7f4ea6460bad3386ed41"} Jan 29 16:25:03 crc kubenswrapper[4714]: I0129 16:25:03.588520 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs" Jan 29 16:25:03 crc kubenswrapper[4714]: I0129 16:25:03.754642 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/096bd691-cca6-4566-b56c-7643e2feaef1-util\") pod \"096bd691-cca6-4566-b56c-7643e2feaef1\" (UID: \"096bd691-cca6-4566-b56c-7643e2feaef1\") " Jan 29 16:25:03 crc kubenswrapper[4714]: I0129 16:25:03.754745 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pplc7\" (UniqueName: \"kubernetes.io/projected/096bd691-cca6-4566-b56c-7643e2feaef1-kube-api-access-pplc7\") pod \"096bd691-cca6-4566-b56c-7643e2feaef1\" (UID: \"096bd691-cca6-4566-b56c-7643e2feaef1\") " Jan 29 16:25:03 crc kubenswrapper[4714]: I0129 16:25:03.754783 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/096bd691-cca6-4566-b56c-7643e2feaef1-bundle\") pod \"096bd691-cca6-4566-b56c-7643e2feaef1\" (UID: \"096bd691-cca6-4566-b56c-7643e2feaef1\") " Jan 29 16:25:03 crc kubenswrapper[4714]: I0129 16:25:03.757272 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/096bd691-cca6-4566-b56c-7643e2feaef1-bundle" (OuterVolumeSpecName: "bundle") pod "096bd691-cca6-4566-b56c-7643e2feaef1" (UID: "096bd691-cca6-4566-b56c-7643e2feaef1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:25:03 crc kubenswrapper[4714]: I0129 16:25:03.765134 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/096bd691-cca6-4566-b56c-7643e2feaef1-kube-api-access-pplc7" (OuterVolumeSpecName: "kube-api-access-pplc7") pod "096bd691-cca6-4566-b56c-7643e2feaef1" (UID: "096bd691-cca6-4566-b56c-7643e2feaef1"). InnerVolumeSpecName "kube-api-access-pplc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:25:03 crc kubenswrapper[4714]: I0129 16:25:03.765755 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/096bd691-cca6-4566-b56c-7643e2feaef1-util" (OuterVolumeSpecName: "util") pod "096bd691-cca6-4566-b56c-7643e2feaef1" (UID: "096bd691-cca6-4566-b56c-7643e2feaef1"). 
InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:25:03 crc kubenswrapper[4714]: I0129 16:25:03.855382 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pplc7\" (UniqueName: \"kubernetes.io/projected/096bd691-cca6-4566-b56c-7643e2feaef1-kube-api-access-pplc7\") on node \"crc\" DevicePath \"\"" Jan 29 16:25:03 crc kubenswrapper[4714]: I0129 16:25:03.855424 4714 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/096bd691-cca6-4566-b56c-7643e2feaef1-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:25:03 crc kubenswrapper[4714]: I0129 16:25:03.855432 4714 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/096bd691-cca6-4566-b56c-7643e2feaef1-util\") on node \"crc\" DevicePath \"\"" Jan 29 16:25:04 crc kubenswrapper[4714]: I0129 16:25:04.348133 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs" event={"ID":"096bd691-cca6-4566-b56c-7643e2feaef1","Type":"ContainerDied","Data":"9498d20df0cd78103e7153f16a6de5efe6b72b7790741a99a88e73ae8cbf811d"} Jan 29 16:25:04 crc kubenswrapper[4714]: I0129 16:25:04.348206 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9498d20df0cd78103e7153f16a6de5efe6b72b7790741a99a88e73ae8cbf811d" Jan 29 16:25:04 crc kubenswrapper[4714]: I0129 16:25:04.348230 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs" Jan 29 16:25:13 crc kubenswrapper[4714]: I0129 16:25:13.529643 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq"] Jan 29 16:25:13 crc kubenswrapper[4714]: E0129 16:25:13.530228 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="096bd691-cca6-4566-b56c-7643e2feaef1" containerName="pull" Jan 29 16:25:13 crc kubenswrapper[4714]: I0129 16:25:13.530265 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="096bd691-cca6-4566-b56c-7643e2feaef1" containerName="pull" Jan 29 16:25:13 crc kubenswrapper[4714]: E0129 16:25:13.530282 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="096bd691-cca6-4566-b56c-7643e2feaef1" containerName="extract" Jan 29 16:25:13 crc kubenswrapper[4714]: I0129 16:25:13.530290 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="096bd691-cca6-4566-b56c-7643e2feaef1" containerName="extract" Jan 29 16:25:13 crc kubenswrapper[4714]: E0129 16:25:13.530309 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="096bd691-cca6-4566-b56c-7643e2feaef1" containerName="util" Jan 29 16:25:13 crc kubenswrapper[4714]: I0129 16:25:13.530317 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="096bd691-cca6-4566-b56c-7643e2feaef1" containerName="util" Jan 29 16:25:13 crc kubenswrapper[4714]: I0129 16:25:13.530474 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="096bd691-cca6-4566-b56c-7643e2feaef1" containerName="extract" Jan 29 16:25:13 crc kubenswrapper[4714]: I0129 16:25:13.530960 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq" Jan 29 16:25:13 crc kubenswrapper[4714]: I0129 16:25:13.532906 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-5tqg8" Jan 29 16:25:13 crc kubenswrapper[4714]: I0129 16:25:13.533301 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-service-cert" Jan 29 16:25:13 crc kubenswrapper[4714]: I0129 16:25:13.543835 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq"] Jan 29 16:25:13 crc kubenswrapper[4714]: I0129 16:25:13.689148 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2779e724-225f-4a5f-9e2c-3b05fe08dff2-apiservice-cert\") pod \"infra-operator-controller-manager-66f4f5476c-xqnxq\" (UID: \"2779e724-225f-4a5f-9e2c-3b05fe08dff2\") " pod="openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq" Jan 29 16:25:13 crc kubenswrapper[4714]: I0129 16:25:13.689217 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2779e724-225f-4a5f-9e2c-3b05fe08dff2-webhook-cert\") pod \"infra-operator-controller-manager-66f4f5476c-xqnxq\" (UID: \"2779e724-225f-4a5f-9e2c-3b05fe08dff2\") " pod="openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq" Jan 29 16:25:13 crc kubenswrapper[4714]: I0129 16:25:13.689286 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jxvt\" (UniqueName: \"kubernetes.io/projected/2779e724-225f-4a5f-9e2c-3b05fe08dff2-kube-api-access-6jxvt\") pod \"infra-operator-controller-manager-66f4f5476c-xqnxq\" (UID: \"2779e724-225f-4a5f-9e2c-3b05fe08dff2\") " pod="openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq" Jan 29 16:25:13 crc kubenswrapper[4714]: I0129 16:25:13.790010 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jxvt\" (UniqueName: \"kubernetes.io/projected/2779e724-225f-4a5f-9e2c-3b05fe08dff2-kube-api-access-6jxvt\") pod \"infra-operator-controller-manager-66f4f5476c-xqnxq\" (UID: \"2779e724-225f-4a5f-9e2c-3b05fe08dff2\") " pod="openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq" Jan 29 16:25:13 crc kubenswrapper[4714]: I0129 16:25:13.790105 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2779e724-225f-4a5f-9e2c-3b05fe08dff2-apiservice-cert\") pod \"infra-operator-controller-manager-66f4f5476c-xqnxq\" (UID: \"2779e724-225f-4a5f-9e2c-3b05fe08dff2\") " pod="openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq" Jan 29 16:25:13 crc kubenswrapper[4714]: I0129 16:25:13.790138 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2779e724-225f-4a5f-9e2c-3b05fe08dff2-webhook-cert\") pod \"infra-operator-controller-manager-66f4f5476c-xqnxq\" (UID: \"2779e724-225f-4a5f-9e2c-3b05fe08dff2\") " pod="openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq" Jan 29 16:25:13 crc kubenswrapper[4714]: I0129 16:25:13.795662 4714 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2779e724-225f-4a5f-9e2c-3b05fe08dff2-webhook-cert\") pod \"infra-operator-controller-manager-66f4f5476c-xqnxq\" (UID: \"2779e724-225f-4a5f-9e2c-3b05fe08dff2\") " pod="openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq" Jan 29 16:25:13 crc kubenswrapper[4714]: I0129 16:25:13.795695 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2779e724-225f-4a5f-9e2c-3b05fe08dff2-apiservice-cert\") pod \"infra-operator-controller-manager-66f4f5476c-xqnxq\" (UID: \"2779e724-225f-4a5f-9e2c-3b05fe08dff2\") " pod="openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq" Jan 29 16:25:13 crc kubenswrapper[4714]: I0129 16:25:13.810114 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jxvt\" (UniqueName: \"kubernetes.io/projected/2779e724-225f-4a5f-9e2c-3b05fe08dff2-kube-api-access-6jxvt\") pod \"infra-operator-controller-manager-66f4f5476c-xqnxq\" (UID: \"2779e724-225f-4a5f-9e2c-3b05fe08dff2\") " pod="openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq" Jan 29 16:25:13 crc kubenswrapper[4714]: I0129 16:25:13.868470 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq" Jan 29 16:25:14 crc kubenswrapper[4714]: I0129 16:25:14.274803 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq"] Jan 29 16:25:14 crc kubenswrapper[4714]: I0129 16:25:14.415157 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq" event={"ID":"2779e724-225f-4a5f-9e2c-3b05fe08dff2","Type":"ContainerStarted","Data":"a29d14b8a75b07c5bbb8bc497bdaef3d2bf58ebfc2cc63259d7bb78d82a74639"} Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.283817 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/openstack-galera-0"] Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.286489 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.291334 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"galera-openstack-dockercfg-qdqvq" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.291424 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"openstack-config-data" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.291570 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"openstack-scripts" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.292463 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"kube-root-ca.crt" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.292556 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"openshift-service-ca.crt" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.303249 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/openstack-galera-0"] Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.308372 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/openstack-galera-1"] Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.309444 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.314975 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/openstack-galera-2"] Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.316410 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.325532 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/openstack-galera-1"] Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.331334 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/openstack-galera-2"] Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.447888 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.447980 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f8d336f2-b190-4e32-be3a-27fbf0e50a06-kolla-config\") pod \"openstack-galera-1\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.448021 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-2\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.448057 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e367e739-45d9-4c71-82fa-ecda02da3277-kolla-config\") pod 
\"openstack-galera-2\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.448083 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f8d336f2-b190-4e32-be3a-27fbf0e50a06-config-data-generated\") pod \"openstack-galera-1\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.448103 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e27f02c1-a7d5-4d49-838b-df5445720a07-kolla-config\") pod \"openstack-galera-0\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.448126 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e27f02c1-a7d5-4d49-838b-df5445720a07-config-data-default\") pod \"openstack-galera-0\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.448157 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-1\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.448301 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8d336f2-b190-4e32-be3a-27fbf0e50a06-operator-scripts\") pod \"openstack-galera-1\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.448364 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e27f02c1-a7d5-4d49-838b-df5445720a07-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.448389 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e367e739-45d9-4c71-82fa-ecda02da3277-config-data-generated\") pod \"openstack-galera-2\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.448460 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhbgj\" (UniqueName: \"kubernetes.io/projected/f8d336f2-b190-4e32-be3a-27fbf0e50a06-kube-api-access-lhbgj\") pod \"openstack-galera-1\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.448490 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6vvb\" (UniqueName: 
\"kubernetes.io/projected/e27f02c1-a7d5-4d49-838b-df5445720a07-kube-api-access-p6vvb\") pod \"openstack-galera-0\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.448506 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nmlf\" (UniqueName: \"kubernetes.io/projected/e367e739-45d9-4c71-82fa-ecda02da3277-kube-api-access-8nmlf\") pod \"openstack-galera-2\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.448527 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f8d336f2-b190-4e32-be3a-27fbf0e50a06-config-data-default\") pod \"openstack-galera-1\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.448769 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e367e739-45d9-4c71-82fa-ecda02da3277-operator-scripts\") pod \"openstack-galera-2\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.448824 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e367e739-45d9-4c71-82fa-ecda02da3277-config-data-default\") pod \"openstack-galera-2\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.448860 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e27f02c1-a7d5-4d49-838b-df5445720a07-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.550274 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8d336f2-b190-4e32-be3a-27fbf0e50a06-operator-scripts\") pod \"openstack-galera-1\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.550318 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e27f02c1-a7d5-4d49-838b-df5445720a07-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.550342 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e367e739-45d9-4c71-82fa-ecda02da3277-config-data-generated\") pod \"openstack-galera-2\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.550371 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-p6vvb\" (UniqueName: \"kubernetes.io/projected/e27f02c1-a7d5-4d49-838b-df5445720a07-kube-api-access-p6vvb\") pod \"openstack-galera-0\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.550389 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nmlf\" (UniqueName: \"kubernetes.io/projected/e367e739-45d9-4c71-82fa-ecda02da3277-kube-api-access-8nmlf\") pod \"openstack-galera-2\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.550407 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhbgj\" (UniqueName: \"kubernetes.io/projected/f8d336f2-b190-4e32-be3a-27fbf0e50a06-kube-api-access-lhbgj\") pod \"openstack-galera-1\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.550429 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f8d336f2-b190-4e32-be3a-27fbf0e50a06-config-data-default\") pod \"openstack-galera-1\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.550468 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e367e739-45d9-4c71-82fa-ecda02da3277-operator-scripts\") pod \"openstack-galera-2\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.550494 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e367e739-45d9-4c71-82fa-ecda02da3277-config-data-default\") pod \"openstack-galera-2\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.550516 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e27f02c1-a7d5-4d49-838b-df5445720a07-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.550541 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.550560 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f8d336f2-b190-4e32-be3a-27fbf0e50a06-kolla-config\") pod \"openstack-galera-1\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.550595 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod 
\"openstack-galera-2\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.550620 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e367e739-45d9-4c71-82fa-ecda02da3277-kolla-config\") pod \"openstack-galera-2\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.550641 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e27f02c1-a7d5-4d49-838b-df5445720a07-kolla-config\") pod \"openstack-galera-0\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.550661 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f8d336f2-b190-4e32-be3a-27fbf0e50a06-config-data-generated\") pod \"openstack-galera-1\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.550682 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e27f02c1-a7d5-4d49-838b-df5445720a07-config-data-default\") pod \"openstack-galera-0\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.550708 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-1\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.551588 4714 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-1\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") device mount path \"/mnt/openstack/pv04\"" pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.551591 4714 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") device mount path \"/mnt/openstack/pv01\"" pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.551622 4714 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-2\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") device mount path \"/mnt/openstack/pv10\"" pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.556368 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e27f02c1-a7d5-4d49-838b-df5445720a07-kolla-config\") pod \"openstack-galera-0\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 
16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.556832 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f8d336f2-b190-4e32-be3a-27fbf0e50a06-config-data-default\") pod \"openstack-galera-1\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.556922 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f8d336f2-b190-4e32-be3a-27fbf0e50a06-kolla-config\") pod \"openstack-galera-1\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.557112 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f8d336f2-b190-4e32-be3a-27fbf0e50a06-config-data-generated\") pod \"openstack-galera-1\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.557195 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e27f02c1-a7d5-4d49-838b-df5445720a07-config-data-default\") pod \"openstack-galera-0\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.557382 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8d336f2-b190-4e32-be3a-27fbf0e50a06-operator-scripts\") pod \"openstack-galera-1\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.557562 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e367e739-45d9-4c71-82fa-ecda02da3277-config-data-generated\") pod \"openstack-galera-2\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.558346 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e27f02c1-a7d5-4d49-838b-df5445720a07-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.559045 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e367e739-45d9-4c71-82fa-ecda02da3277-kolla-config\") pod \"openstack-galera-2\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.559648 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e27f02c1-a7d5-4d49-838b-df5445720a07-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.559795 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e367e739-45d9-4c71-82fa-ecda02da3277-operator-scripts\") pod \"openstack-galera-2\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.570845 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nmlf\" (UniqueName: \"kubernetes.io/projected/e367e739-45d9-4c71-82fa-ecda02da3277-kube-api-access-8nmlf\") pod \"openstack-galera-2\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.572065 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.575247 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-1\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.577275 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhbgj\" (UniqueName: \"kubernetes.io/projected/f8d336f2-b190-4e32-be3a-27fbf0e50a06-kube-api-access-lhbgj\") pod \"openstack-galera-1\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.578958 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e367e739-45d9-4c71-82fa-ecda02da3277-config-data-default\") pod \"openstack-galera-2\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.580717 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6vvb\" (UniqueName: \"kubernetes.io/projected/e27f02c1-a7d5-4d49-838b-df5445720a07-kube-api-access-p6vvb\") pod \"openstack-galera-0\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.583678 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-2\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.614835 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.635148 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:25:17 crc kubenswrapper[4714]: I0129 16:25:17.650587 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:18 crc kubenswrapper[4714]: I0129 16:25:18.059252 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/openstack-galera-0"] Jan 29 16:25:18 crc kubenswrapper[4714]: I0129 16:25:18.103587 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/openstack-galera-1"] Jan 29 16:25:18 crc kubenswrapper[4714]: W0129 16:25:18.107546 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8d336f2_b190_4e32_be3a_27fbf0e50a06.slice/crio-2730c5f20c68259dc37f00e3d986e43810a9d5fe85207a2c704b5656855e553b WatchSource:0}: Error finding container 2730c5f20c68259dc37f00e3d986e43810a9d5fe85207a2c704b5656855e553b: Status 404 returned error can't find the container with id 2730c5f20c68259dc37f00e3d986e43810a9d5fe85207a2c704b5656855e553b Jan 29 16:25:18 crc kubenswrapper[4714]: I0129 16:25:18.111496 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/openstack-galera-2"] Jan 29 16:25:18 crc kubenswrapper[4714]: W0129 16:25:18.116792 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode367e739_45d9_4c71_82fa_ecda02da3277.slice/crio-9165416a79a8d14934c00fc8e00a91ffd697d205964c3585f55278b965651da9 WatchSource:0}: Error finding container 9165416a79a8d14934c00fc8e00a91ffd697d205964c3585f55278b965651da9: Status 404 returned error can't find the container with id 9165416a79a8d14934c00fc8e00a91ffd697d205964c3585f55278b965651da9 Jan 29 16:25:18 crc kubenswrapper[4714]: I0129 16:25:18.442964 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-2" event={"ID":"e367e739-45d9-4c71-82fa-ecda02da3277","Type":"ContainerStarted","Data":"9165416a79a8d14934c00fc8e00a91ffd697d205964c3585f55278b965651da9"} Jan 29 16:25:18 crc kubenswrapper[4714]: I0129 16:25:18.444459 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq" event={"ID":"2779e724-225f-4a5f-9e2c-3b05fe08dff2","Type":"ContainerStarted","Data":"cec1cafa23793d2e5f4bd3af8e35a522a2c8cc5c802408ae2ad9896bd189471e"} Jan 29 16:25:18 crc kubenswrapper[4714]: I0129 16:25:18.444638 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq" Jan 29 16:25:18 crc kubenswrapper[4714]: I0129 16:25:18.445594 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-0" event={"ID":"e27f02c1-a7d5-4d49-838b-df5445720a07","Type":"ContainerStarted","Data":"37a7ed7cc71d5ba4399190ff48b8e2d70a326be5e9ad5c8773900669dfc3740e"} Jan 29 16:25:18 crc kubenswrapper[4714]: I0129 16:25:18.446793 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-1" event={"ID":"f8d336f2-b190-4e32-be3a-27fbf0e50a06","Type":"ContainerStarted","Data":"2730c5f20c68259dc37f00e3d986e43810a9d5fe85207a2c704b5656855e553b"} Jan 29 16:25:18 crc kubenswrapper[4714]: I0129 16:25:18.470431 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq" podStartSLOduration=2.273415463 podStartE2EDuration="5.470401892s" podCreationTimestamp="2026-01-29 16:25:13 +0000 UTC" firstStartedPulling="2026-01-29 16:25:14.290221467 +0000 UTC m=+920.810722587" 
lastFinishedPulling="2026-01-29 16:25:17.487207906 +0000 UTC m=+924.007709016" observedRunningTime="2026-01-29 16:25:18.467824657 +0000 UTC m=+924.988325777" watchObservedRunningTime="2026-01-29 16:25:18.470401892 +0000 UTC m=+924.990903022" Jan 29 16:25:23 crc kubenswrapper[4714]: I0129 16:25:23.872593 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq" Jan 29 16:25:27 crc kubenswrapper[4714]: I0129 16:25:27.366356 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/memcached-0"] Jan 29 16:25:27 crc kubenswrapper[4714]: I0129 16:25:27.367745 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/memcached-0" Jan 29 16:25:27 crc kubenswrapper[4714]: I0129 16:25:27.369672 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"memcached-config-data" Jan 29 16:25:27 crc kubenswrapper[4714]: I0129 16:25:27.370149 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"memcached-memcached-dockercfg-drt8f" Jan 29 16:25:27 crc kubenswrapper[4714]: I0129 16:25:27.380583 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/memcached-0"] Jan 29 16:25:27 crc kubenswrapper[4714]: I0129 16:25:27.516126 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea-kolla-config\") pod \"memcached-0\" (UID: \"d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea\") " pod="cinder-kuttl-tests/memcached-0" Jan 29 16:25:27 crc kubenswrapper[4714]: I0129 16:25:27.516185 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea-config-data\") pod \"memcached-0\" (UID: \"d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea\") " pod="cinder-kuttl-tests/memcached-0" Jan 29 16:25:27 crc kubenswrapper[4714]: I0129 16:25:27.516222 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkmzp\" (UniqueName: \"kubernetes.io/projected/d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea-kube-api-access-vkmzp\") pod \"memcached-0\" (UID: \"d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea\") " pod="cinder-kuttl-tests/memcached-0" Jan 29 16:25:27 crc kubenswrapper[4714]: I0129 16:25:27.617797 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea-kolla-config\") pod \"memcached-0\" (UID: \"d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea\") " pod="cinder-kuttl-tests/memcached-0" Jan 29 16:25:27 crc kubenswrapper[4714]: I0129 16:25:27.617892 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea-config-data\") pod \"memcached-0\" (UID: \"d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea\") " pod="cinder-kuttl-tests/memcached-0" Jan 29 16:25:27 crc kubenswrapper[4714]: I0129 16:25:27.618012 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkmzp\" (UniqueName: \"kubernetes.io/projected/d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea-kube-api-access-vkmzp\") pod \"memcached-0\" (UID: \"d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea\") " 
pod="cinder-kuttl-tests/memcached-0" Jan 29 16:25:27 crc kubenswrapper[4714]: I0129 16:25:27.618870 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea-config-data\") pod \"memcached-0\" (UID: \"d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea\") " pod="cinder-kuttl-tests/memcached-0" Jan 29 16:25:27 crc kubenswrapper[4714]: I0129 16:25:27.619245 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea-kolla-config\") pod \"memcached-0\" (UID: \"d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea\") " pod="cinder-kuttl-tests/memcached-0" Jan 29 16:25:27 crc kubenswrapper[4714]: I0129 16:25:27.657209 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkmzp\" (UniqueName: \"kubernetes.io/projected/d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea-kube-api-access-vkmzp\") pod \"memcached-0\" (UID: \"d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea\") " pod="cinder-kuttl-tests/memcached-0" Jan 29 16:25:27 crc kubenswrapper[4714]: I0129 16:25:27.684075 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/memcached-0" Jan 29 16:25:27 crc kubenswrapper[4714]: I0129 16:25:27.952818 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-794kb"] Jan 29 16:25:27 crc kubenswrapper[4714]: I0129 16:25:27.962120 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-794kb" Jan 29 16:25:27 crc kubenswrapper[4714]: I0129 16:25:27.965262 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-index-dockercfg-mqv6m" Jan 29 16:25:27 crc kubenswrapper[4714]: I0129 16:25:27.972754 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-794kb"] Jan 29 16:25:28 crc kubenswrapper[4714]: I0129 16:25:28.028851 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbp85\" (UniqueName: \"kubernetes.io/projected/f3272d6a-aac2-4e20-b996-28fc1980cd2e-kube-api-access-vbp85\") pod \"rabbitmq-cluster-operator-index-794kb\" (UID: \"f3272d6a-aac2-4e20-b996-28fc1980cd2e\") " pod="openstack-operators/rabbitmq-cluster-operator-index-794kb" Jan 29 16:25:28 crc kubenswrapper[4714]: I0129 16:25:28.131240 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbp85\" (UniqueName: \"kubernetes.io/projected/f3272d6a-aac2-4e20-b996-28fc1980cd2e-kube-api-access-vbp85\") pod \"rabbitmq-cluster-operator-index-794kb\" (UID: \"f3272d6a-aac2-4e20-b996-28fc1980cd2e\") " pod="openstack-operators/rabbitmq-cluster-operator-index-794kb" Jan 29 16:25:28 crc kubenswrapper[4714]: I0129 16:25:28.148763 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbp85\" (UniqueName: \"kubernetes.io/projected/f3272d6a-aac2-4e20-b996-28fc1980cd2e-kube-api-access-vbp85\") pod \"rabbitmq-cluster-operator-index-794kb\" (UID: \"f3272d6a-aac2-4e20-b996-28fc1980cd2e\") " pod="openstack-operators/rabbitmq-cluster-operator-index-794kb" Jan 29 16:25:28 crc kubenswrapper[4714]: I0129 16:25:28.208410 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/memcached-0"] Jan 29 16:25:28 crc kubenswrapper[4714]: 
I0129 16:25:28.305385 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-794kb" Jan 29 16:25:28 crc kubenswrapper[4714]: I0129 16:25:28.527960 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-2" event={"ID":"e367e739-45d9-4c71-82fa-ecda02da3277","Type":"ContainerStarted","Data":"ec556421e74871bfa9c85c53ff648a1a0691767027648a2904e427bc8f75360f"} Jan 29 16:25:28 crc kubenswrapper[4714]: I0129 16:25:28.533444 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/memcached-0" event={"ID":"d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea","Type":"ContainerStarted","Data":"5fcd1b55c77976e4d94c390473639a6b02e3c4a2129659d96d1e68f62ca74a39"} Jan 29 16:25:28 crc kubenswrapper[4714]: I0129 16:25:28.805826 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-794kb"] Jan 29 16:25:29 crc kubenswrapper[4714]: I0129 16:25:29.550578 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-794kb" event={"ID":"f3272d6a-aac2-4e20-b996-28fc1980cd2e","Type":"ContainerStarted","Data":"dc124fe5adeea1043a217db158112150ae74a2240fb76a192af1e43bb85ff258"} Jan 29 16:25:29 crc kubenswrapper[4714]: I0129 16:25:29.551955 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-0" event={"ID":"e27f02c1-a7d5-4d49-838b-df5445720a07","Type":"ContainerStarted","Data":"2c8a621ff05274e374040ac76aa490fcc2858782c45b93e547a0c03c210a4530"} Jan 29 16:25:30 crc kubenswrapper[4714]: I0129 16:25:30.560817 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-1" event={"ID":"f8d336f2-b190-4e32-be3a-27fbf0e50a06","Type":"ContainerStarted","Data":"7b624009e8962fd057296e2a9f997c5b0aab61c294b85ef5b94d41ebe8dd89e7"} Jan 29 16:25:32 crc kubenswrapper[4714]: I0129 16:25:32.160289 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-794kb"] Jan 29 16:25:32 crc kubenswrapper[4714]: I0129 16:25:32.746155 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-ddf2f"] Jan 29 16:25:32 crc kubenswrapper[4714]: I0129 16:25:32.746808 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-ddf2f" Jan 29 16:25:32 crc kubenswrapper[4714]: I0129 16:25:32.769064 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-ddf2f"] Jan 29 16:25:32 crc kubenswrapper[4714]: I0129 16:25:32.786101 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v42s4\" (UniqueName: \"kubernetes.io/projected/e6d50b97-e5e2-426e-b881-dfb2077c0838-kube-api-access-v42s4\") pod \"rabbitmq-cluster-operator-index-ddf2f\" (UID: \"e6d50b97-e5e2-426e-b881-dfb2077c0838\") " pod="openstack-operators/rabbitmq-cluster-operator-index-ddf2f" Jan 29 16:25:32 crc kubenswrapper[4714]: I0129 16:25:32.887205 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v42s4\" (UniqueName: \"kubernetes.io/projected/e6d50b97-e5e2-426e-b881-dfb2077c0838-kube-api-access-v42s4\") pod \"rabbitmq-cluster-operator-index-ddf2f\" (UID: \"e6d50b97-e5e2-426e-b881-dfb2077c0838\") " pod="openstack-operators/rabbitmq-cluster-operator-index-ddf2f" Jan 29 16:25:32 crc kubenswrapper[4714]: I0129 16:25:32.929547 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v42s4\" (UniqueName: \"kubernetes.io/projected/e6d50b97-e5e2-426e-b881-dfb2077c0838-kube-api-access-v42s4\") pod \"rabbitmq-cluster-operator-index-ddf2f\" (UID: \"e6d50b97-e5e2-426e-b881-dfb2077c0838\") " pod="openstack-operators/rabbitmq-cluster-operator-index-ddf2f" Jan 29 16:25:33 crc kubenswrapper[4714]: I0129 16:25:33.082598 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-ddf2f" Jan 29 16:25:33 crc kubenswrapper[4714]: I0129 16:25:33.536047 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-ddf2f"] Jan 29 16:25:33 crc kubenswrapper[4714]: W0129 16:25:33.546103 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6d50b97_e5e2_426e_b881_dfb2077c0838.slice/crio-74d0d3468956b4f9354638adf80a846fa26535a7e74e370eeb721012c70bb9d7 WatchSource:0}: Error finding container 74d0d3468956b4f9354638adf80a846fa26535a7e74e370eeb721012c70bb9d7: Status 404 returned error can't find the container with id 74d0d3468956b4f9354638adf80a846fa26535a7e74e370eeb721012c70bb9d7 Jan 29 16:25:33 crc kubenswrapper[4714]: I0129 16:25:33.578686 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-ddf2f" event={"ID":"e6d50b97-e5e2-426e-b881-dfb2077c0838","Type":"ContainerStarted","Data":"74d0d3468956b4f9354638adf80a846fa26535a7e74e370eeb721012c70bb9d7"} Jan 29 16:25:33 crc kubenswrapper[4714]: I0129 16:25:33.585478 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/memcached-0" event={"ID":"d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea","Type":"ContainerStarted","Data":"5244ecc09d3c74f3a195f77af7759f9d242093032f09d5208383b4619dda295d"} Jan 29 16:25:33 crc kubenswrapper[4714]: I0129 16:25:33.585619 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/memcached-0" Jan 29 16:25:33 crc kubenswrapper[4714]: I0129 16:25:33.604518 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/memcached-0" podStartSLOduration=1.663254696 podStartE2EDuration="6.604500575s" 
podCreationTimestamp="2026-01-29 16:25:27 +0000 UTC" firstStartedPulling="2026-01-29 16:25:28.241387713 +0000 UTC m=+934.761888833" lastFinishedPulling="2026-01-29 16:25:33.182633592 +0000 UTC m=+939.703134712" observedRunningTime="2026-01-29 16:25:33.601494507 +0000 UTC m=+940.121995637" watchObservedRunningTime="2026-01-29 16:25:33.604500575 +0000 UTC m=+940.125001695" Jan 29 16:25:35 crc kubenswrapper[4714]: I0129 16:25:35.607096 4714 generic.go:334] "Generic (PLEG): container finished" podID="e27f02c1-a7d5-4d49-838b-df5445720a07" containerID="2c8a621ff05274e374040ac76aa490fcc2858782c45b93e547a0c03c210a4530" exitCode=0 Jan 29 16:25:35 crc kubenswrapper[4714]: I0129 16:25:35.607174 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-0" event={"ID":"e27f02c1-a7d5-4d49-838b-df5445720a07","Type":"ContainerDied","Data":"2c8a621ff05274e374040ac76aa490fcc2858782c45b93e547a0c03c210a4530"} Jan 29 16:25:35 crc kubenswrapper[4714]: I0129 16:25:35.610149 4714 generic.go:334] "Generic (PLEG): container finished" podID="f8d336f2-b190-4e32-be3a-27fbf0e50a06" containerID="7b624009e8962fd057296e2a9f997c5b0aab61c294b85ef5b94d41ebe8dd89e7" exitCode=0 Jan 29 16:25:35 crc kubenswrapper[4714]: I0129 16:25:35.610243 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-1" event={"ID":"f8d336f2-b190-4e32-be3a-27fbf0e50a06","Type":"ContainerDied","Data":"7b624009e8962fd057296e2a9f997c5b0aab61c294b85ef5b94d41ebe8dd89e7"} Jan 29 16:25:35 crc kubenswrapper[4714]: I0129 16:25:35.612375 4714 generic.go:334] "Generic (PLEG): container finished" podID="e367e739-45d9-4c71-82fa-ecda02da3277" containerID="ec556421e74871bfa9c85c53ff648a1a0691767027648a2904e427bc8f75360f" exitCode=0 Jan 29 16:25:35 crc kubenswrapper[4714]: I0129 16:25:35.612414 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-2" event={"ID":"e367e739-45d9-4c71-82fa-ecda02da3277","Type":"ContainerDied","Data":"ec556421e74871bfa9c85c53ff648a1a0691767027648a2904e427bc8f75360f"} Jan 29 16:25:36 crc kubenswrapper[4714]: I0129 16:25:36.620627 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-0" event={"ID":"e27f02c1-a7d5-4d49-838b-df5445720a07","Type":"ContainerStarted","Data":"b027a08741bc7af4590803c43740e388681a5f71be01b0ea2974650f0e3e74fc"} Jan 29 16:25:36 crc kubenswrapper[4714]: I0129 16:25:36.622452 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-ddf2f" event={"ID":"e6d50b97-e5e2-426e-b881-dfb2077c0838","Type":"ContainerStarted","Data":"f6b5016e56cc05cedf6153ecd27ad5475f41f09c64ae9c2dcbdb602e86bd0445"} Jan 29 16:25:36 crc kubenswrapper[4714]: I0129 16:25:36.624576 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-1" event={"ID":"f8d336f2-b190-4e32-be3a-27fbf0e50a06","Type":"ContainerStarted","Data":"83c3eb1ecb12cd8202e4a2f2b14330aa7199092be01cfe200e08827657c44a8b"} Jan 29 16:25:36 crc kubenswrapper[4714]: I0129 16:25:36.626748 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-794kb" event={"ID":"f3272d6a-aac2-4e20-b996-28fc1980cd2e","Type":"ContainerStarted","Data":"d4336bc35a9f3b5419f76f6a79f33203bd10152a030a0548f63621c695133a4c"} Jan 29 16:25:36 crc kubenswrapper[4714]: I0129 16:25:36.626849 4714 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack-operators/rabbitmq-cluster-operator-index-794kb" podUID="f3272d6a-aac2-4e20-b996-28fc1980cd2e" containerName="registry-server" containerID="cri-o://d4336bc35a9f3b5419f76f6a79f33203bd10152a030a0548f63621c695133a4c" gracePeriod=2 Jan 29 16:25:36 crc kubenswrapper[4714]: I0129 16:25:36.629267 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-2" event={"ID":"e367e739-45d9-4c71-82fa-ecda02da3277","Type":"ContainerStarted","Data":"6653bc0073fbdde40468fb06b953c1302846763f2888418b4fc21509ef3915d2"} Jan 29 16:25:36 crc kubenswrapper[4714]: I0129 16:25:36.646524 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/openstack-galera-0" podStartSLOduration=10.499287299 podStartE2EDuration="20.646506011s" podCreationTimestamp="2026-01-29 16:25:16 +0000 UTC" firstStartedPulling="2026-01-29 16:25:18.062804216 +0000 UTC m=+924.583305336" lastFinishedPulling="2026-01-29 16:25:28.210022928 +0000 UTC m=+934.730524048" observedRunningTime="2026-01-29 16:25:36.641401232 +0000 UTC m=+943.161902352" watchObservedRunningTime="2026-01-29 16:25:36.646506011 +0000 UTC m=+943.167007131" Jan 29 16:25:36 crc kubenswrapper[4714]: I0129 16:25:36.662470 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-ddf2f" podStartSLOduration=2.3995356 podStartE2EDuration="4.662447836s" podCreationTimestamp="2026-01-29 16:25:32 +0000 UTC" firstStartedPulling="2026-01-29 16:25:33.547423399 +0000 UTC m=+940.067924519" lastFinishedPulling="2026-01-29 16:25:35.810335635 +0000 UTC m=+942.330836755" observedRunningTime="2026-01-29 16:25:36.657842721 +0000 UTC m=+943.178343841" watchObservedRunningTime="2026-01-29 16:25:36.662447836 +0000 UTC m=+943.182948956" Jan 29 16:25:36 crc kubenswrapper[4714]: I0129 16:25:36.682124 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/openstack-galera-1" podStartSLOduration=9.28406374 podStartE2EDuration="20.68211016s" podCreationTimestamp="2026-01-29 16:25:16 +0000 UTC" firstStartedPulling="2026-01-29 16:25:18.110086206 +0000 UTC m=+924.630587316" lastFinishedPulling="2026-01-29 16:25:29.508132616 +0000 UTC m=+936.028633736" observedRunningTime="2026-01-29 16:25:36.678418582 +0000 UTC m=+943.198919712" watchObservedRunningTime="2026-01-29 16:25:36.68211016 +0000 UTC m=+943.202611280" Jan 29 16:25:36 crc kubenswrapper[4714]: I0129 16:25:36.701298 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/openstack-galera-2" podStartSLOduration=10.672422561 podStartE2EDuration="20.701283239s" podCreationTimestamp="2026-01-29 16:25:16 +0000 UTC" firstStartedPulling="2026-01-29 16:25:18.119177351 +0000 UTC m=+924.639678471" lastFinishedPulling="2026-01-29 16:25:28.148038029 +0000 UTC m=+934.668539149" observedRunningTime="2026-01-29 16:25:36.697686524 +0000 UTC m=+943.218187644" watchObservedRunningTime="2026-01-29 16:25:36.701283239 +0000 UTC m=+943.221784359" Jan 29 16:25:36 crc kubenswrapper[4714]: I0129 16:25:36.722466 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-794kb" podStartSLOduration=2.7445488559999998 podStartE2EDuration="9.722446597s" podCreationTimestamp="2026-01-29 16:25:27 +0000 UTC" firstStartedPulling="2026-01-29 16:25:28.817355214 +0000 UTC m=+935.337856334" lastFinishedPulling="2026-01-29 16:25:35.795252955 +0000 UTC m=+942.315754075" 
observedRunningTime="2026-01-29 16:25:36.718541633 +0000 UTC m=+943.239042753" watchObservedRunningTime="2026-01-29 16:25:36.722446597 +0000 UTC m=+943.242947717" Jan 29 16:25:37 crc kubenswrapper[4714]: I0129 16:25:37.077522 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-794kb" Jan 29 16:25:37 crc kubenswrapper[4714]: I0129 16:25:37.154291 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbp85\" (UniqueName: \"kubernetes.io/projected/f3272d6a-aac2-4e20-b996-28fc1980cd2e-kube-api-access-vbp85\") pod \"f3272d6a-aac2-4e20-b996-28fc1980cd2e\" (UID: \"f3272d6a-aac2-4e20-b996-28fc1980cd2e\") " Jan 29 16:25:37 crc kubenswrapper[4714]: I0129 16:25:37.160047 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3272d6a-aac2-4e20-b996-28fc1980cd2e-kube-api-access-vbp85" (OuterVolumeSpecName: "kube-api-access-vbp85") pod "f3272d6a-aac2-4e20-b996-28fc1980cd2e" (UID: "f3272d6a-aac2-4e20-b996-28fc1980cd2e"). InnerVolumeSpecName "kube-api-access-vbp85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:25:37 crc kubenswrapper[4714]: I0129 16:25:37.255953 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbp85\" (UniqueName: \"kubernetes.io/projected/f3272d6a-aac2-4e20-b996-28fc1980cd2e-kube-api-access-vbp85\") on node \"crc\" DevicePath \"\"" Jan 29 16:25:37 crc kubenswrapper[4714]: I0129 16:25:37.616508 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:25:37 crc kubenswrapper[4714]: I0129 16:25:37.616817 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:25:37 crc kubenswrapper[4714]: I0129 16:25:37.635782 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:25:37 crc kubenswrapper[4714]: I0129 16:25:37.635829 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:25:37 crc kubenswrapper[4714]: I0129 16:25:37.637309 4714 generic.go:334] "Generic (PLEG): container finished" podID="f3272d6a-aac2-4e20-b996-28fc1980cd2e" containerID="d4336bc35a9f3b5419f76f6a79f33203bd10152a030a0548f63621c695133a4c" exitCode=0 Jan 29 16:25:37 crc kubenswrapper[4714]: I0129 16:25:37.637352 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-794kb" Jan 29 16:25:37 crc kubenswrapper[4714]: I0129 16:25:37.637352 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-794kb" event={"ID":"f3272d6a-aac2-4e20-b996-28fc1980cd2e","Type":"ContainerDied","Data":"d4336bc35a9f3b5419f76f6a79f33203bd10152a030a0548f63621c695133a4c"} Jan 29 16:25:37 crc kubenswrapper[4714]: I0129 16:25:37.637384 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-794kb" event={"ID":"f3272d6a-aac2-4e20-b996-28fc1980cd2e","Type":"ContainerDied","Data":"dc124fe5adeea1043a217db158112150ae74a2240fb76a192af1e43bb85ff258"} Jan 29 16:25:37 crc kubenswrapper[4714]: I0129 16:25:37.637400 4714 scope.go:117] "RemoveContainer" containerID="d4336bc35a9f3b5419f76f6a79f33203bd10152a030a0548f63621c695133a4c" Jan 29 16:25:37 crc kubenswrapper[4714]: I0129 16:25:37.650731 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:37 crc kubenswrapper[4714]: I0129 16:25:37.650770 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:37 crc kubenswrapper[4714]: I0129 16:25:37.655980 4714 scope.go:117] "RemoveContainer" containerID="d4336bc35a9f3b5419f76f6a79f33203bd10152a030a0548f63621c695133a4c" Jan 29 16:25:37 crc kubenswrapper[4714]: E0129 16:25:37.656384 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4336bc35a9f3b5419f76f6a79f33203bd10152a030a0548f63621c695133a4c\": container with ID starting with d4336bc35a9f3b5419f76f6a79f33203bd10152a030a0548f63621c695133a4c not found: ID does not exist" containerID="d4336bc35a9f3b5419f76f6a79f33203bd10152a030a0548f63621c695133a4c" Jan 29 16:25:37 crc kubenswrapper[4714]: I0129 16:25:37.656422 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4336bc35a9f3b5419f76f6a79f33203bd10152a030a0548f63621c695133a4c"} err="failed to get container status \"d4336bc35a9f3b5419f76f6a79f33203bd10152a030a0548f63621c695133a4c\": rpc error: code = NotFound desc = could not find container \"d4336bc35a9f3b5419f76f6a79f33203bd10152a030a0548f63621c695133a4c\": container with ID starting with d4336bc35a9f3b5419f76f6a79f33203bd10152a030a0548f63621c695133a4c not found: ID does not exist" Jan 29 16:25:37 crc kubenswrapper[4714]: I0129 16:25:37.668274 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-794kb"] Jan 29 16:25:37 crc kubenswrapper[4714]: I0129 16:25:37.671981 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-794kb"] Jan 29 16:25:38 crc kubenswrapper[4714]: I0129 16:25:38.191128 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3272d6a-aac2-4e20-b996-28fc1980cd2e" path="/var/lib/kubelet/pods/f3272d6a-aac2-4e20-b996-28fc1980cd2e/volumes" Jan 29 16:25:39 crc kubenswrapper[4714]: I0129 16:25:39.153088 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lmqpk"] Jan 29 16:25:39 crc kubenswrapper[4714]: E0129 16:25:39.153296 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3272d6a-aac2-4e20-b996-28fc1980cd2e" containerName="registry-server" Jan 29 16:25:39 crc 
kubenswrapper[4714]: I0129 16:25:39.153307 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3272d6a-aac2-4e20-b996-28fc1980cd2e" containerName="registry-server" Jan 29 16:25:39 crc kubenswrapper[4714]: I0129 16:25:39.153417 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3272d6a-aac2-4e20-b996-28fc1980cd2e" containerName="registry-server" Jan 29 16:25:39 crc kubenswrapper[4714]: I0129 16:25:39.154160 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lmqpk" Jan 29 16:25:39 crc kubenswrapper[4714]: I0129 16:25:39.169094 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lmqpk"] Jan 29 16:25:39 crc kubenswrapper[4714]: I0129 16:25:39.302338 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fppdq\" (UniqueName: \"kubernetes.io/projected/230dbe44-24bd-4a95-9f71-7ee36bb74cce-kube-api-access-fppdq\") pod \"community-operators-lmqpk\" (UID: \"230dbe44-24bd-4a95-9f71-7ee36bb74cce\") " pod="openshift-marketplace/community-operators-lmqpk" Jan 29 16:25:39 crc kubenswrapper[4714]: I0129 16:25:39.302420 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/230dbe44-24bd-4a95-9f71-7ee36bb74cce-catalog-content\") pod \"community-operators-lmqpk\" (UID: \"230dbe44-24bd-4a95-9f71-7ee36bb74cce\") " pod="openshift-marketplace/community-operators-lmqpk" Jan 29 16:25:39 crc kubenswrapper[4714]: I0129 16:25:39.302490 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/230dbe44-24bd-4a95-9f71-7ee36bb74cce-utilities\") pod \"community-operators-lmqpk\" (UID: \"230dbe44-24bd-4a95-9f71-7ee36bb74cce\") " pod="openshift-marketplace/community-operators-lmqpk" Jan 29 16:25:39 crc kubenswrapper[4714]: I0129 16:25:39.404261 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/230dbe44-24bd-4a95-9f71-7ee36bb74cce-catalog-content\") pod \"community-operators-lmqpk\" (UID: \"230dbe44-24bd-4a95-9f71-7ee36bb74cce\") " pod="openshift-marketplace/community-operators-lmqpk" Jan 29 16:25:39 crc kubenswrapper[4714]: I0129 16:25:39.404711 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/230dbe44-24bd-4a95-9f71-7ee36bb74cce-utilities\") pod \"community-operators-lmqpk\" (UID: \"230dbe44-24bd-4a95-9f71-7ee36bb74cce\") " pod="openshift-marketplace/community-operators-lmqpk" Jan 29 16:25:39 crc kubenswrapper[4714]: I0129 16:25:39.404751 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fppdq\" (UniqueName: \"kubernetes.io/projected/230dbe44-24bd-4a95-9f71-7ee36bb74cce-kube-api-access-fppdq\") pod \"community-operators-lmqpk\" (UID: \"230dbe44-24bd-4a95-9f71-7ee36bb74cce\") " pod="openshift-marketplace/community-operators-lmqpk" Jan 29 16:25:39 crc kubenswrapper[4714]: I0129 16:25:39.404833 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/230dbe44-24bd-4a95-9f71-7ee36bb74cce-catalog-content\") pod \"community-operators-lmqpk\" (UID: \"230dbe44-24bd-4a95-9f71-7ee36bb74cce\") " 
pod="openshift-marketplace/community-operators-lmqpk" Jan 29 16:25:39 crc kubenswrapper[4714]: I0129 16:25:39.405354 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/230dbe44-24bd-4a95-9f71-7ee36bb74cce-utilities\") pod \"community-operators-lmqpk\" (UID: \"230dbe44-24bd-4a95-9f71-7ee36bb74cce\") " pod="openshift-marketplace/community-operators-lmqpk" Jan 29 16:25:39 crc kubenswrapper[4714]: I0129 16:25:39.426127 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fppdq\" (UniqueName: \"kubernetes.io/projected/230dbe44-24bd-4a95-9f71-7ee36bb74cce-kube-api-access-fppdq\") pod \"community-operators-lmqpk\" (UID: \"230dbe44-24bd-4a95-9f71-7ee36bb74cce\") " pod="openshift-marketplace/community-operators-lmqpk" Jan 29 16:25:39 crc kubenswrapper[4714]: I0129 16:25:39.470738 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lmqpk" Jan 29 16:25:39 crc kubenswrapper[4714]: I0129 16:25:39.919053 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lmqpk"] Jan 29 16:25:39 crc kubenswrapper[4714]: W0129 16:25:39.925213 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod230dbe44_24bd_4a95_9f71_7ee36bb74cce.slice/crio-b963773a81de472dcc9c53a373aee4fe097bee647f3f7cf04a8f2ef907468156 WatchSource:0}: Error finding container b963773a81de472dcc9c53a373aee4fe097bee647f3f7cf04a8f2ef907468156: Status 404 returned error can't find the container with id b963773a81de472dcc9c53a373aee4fe097bee647f3f7cf04a8f2ef907468156 Jan 29 16:25:40 crc kubenswrapper[4714]: I0129 16:25:40.676547 4714 generic.go:334] "Generic (PLEG): container finished" podID="230dbe44-24bd-4a95-9f71-7ee36bb74cce" containerID="5b730a9dfe53ac182298170cbbd5116f09f6dc7bc63b9980a70b7d90c7997752" exitCode=0 Jan 29 16:25:40 crc kubenswrapper[4714]: I0129 16:25:40.676604 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmqpk" event={"ID":"230dbe44-24bd-4a95-9f71-7ee36bb74cce","Type":"ContainerDied","Data":"5b730a9dfe53ac182298170cbbd5116f09f6dc7bc63b9980a70b7d90c7997752"} Jan 29 16:25:40 crc kubenswrapper[4714]: I0129 16:25:40.676843 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmqpk" event={"ID":"230dbe44-24bd-4a95-9f71-7ee36bb74cce","Type":"ContainerStarted","Data":"b963773a81de472dcc9c53a373aee4fe097bee647f3f7cf04a8f2ef907468156"} Jan 29 16:25:42 crc kubenswrapper[4714]: I0129 16:25:42.685139 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/memcached-0" Jan 29 16:25:43 crc kubenswrapper[4714]: I0129 16:25:43.086059 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/rabbitmq-cluster-operator-index-ddf2f" Jan 29 16:25:43 crc kubenswrapper[4714]: I0129 16:25:43.086119 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/rabbitmq-cluster-operator-index-ddf2f" Jan 29 16:25:43 crc kubenswrapper[4714]: I0129 16:25:43.127860 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/rabbitmq-cluster-operator-index-ddf2f" Jan 29 16:25:43 crc kubenswrapper[4714]: I0129 16:25:43.713550 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/rabbitmq-cluster-operator-index-ddf2f" Jan 29 16:25:46 crc kubenswrapper[4714]: I0129 16:25:46.711226 4714 generic.go:334] "Generic (PLEG): container finished" podID="230dbe44-24bd-4a95-9f71-7ee36bb74cce" containerID="d22cb237e69d1d81bb2cf37fadc3bb5651ed34bb7ad52432494b2ea7410d53d9" exitCode=0 Jan 29 16:25:46 crc kubenswrapper[4714]: I0129 16:25:46.711324 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmqpk" event={"ID":"230dbe44-24bd-4a95-9f71-7ee36bb74cce","Type":"ContainerDied","Data":"d22cb237e69d1d81bb2cf37fadc3bb5651ed34bb7ad52432494b2ea7410d53d9"} Jan 29 16:25:47 crc kubenswrapper[4714]: I0129 16:25:47.206197 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s"] Jan 29 16:25:47 crc kubenswrapper[4714]: I0129 16:25:47.208404 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s" Jan 29 16:25:47 crc kubenswrapper[4714]: I0129 16:25:47.213336 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hwqbr" Jan 29 16:25:47 crc kubenswrapper[4714]: I0129 16:25:47.219880 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s"] Jan 29 16:25:47 crc kubenswrapper[4714]: I0129 16:25:47.305715 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/74070831-862a-4d0a-83b0-4e3d64891601-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s\" (UID: \"74070831-862a-4d0a-83b0-4e3d64891601\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s" Jan 29 16:25:47 crc kubenswrapper[4714]: I0129 16:25:47.305801 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/74070831-862a-4d0a-83b0-4e3d64891601-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s\" (UID: \"74070831-862a-4d0a-83b0-4e3d64891601\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s" Jan 29 16:25:47 crc kubenswrapper[4714]: I0129 16:25:47.305860 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwwng\" (UniqueName: \"kubernetes.io/projected/74070831-862a-4d0a-83b0-4e3d64891601-kube-api-access-dwwng\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s\" (UID: \"74070831-862a-4d0a-83b0-4e3d64891601\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s" Jan 29 16:25:47 crc kubenswrapper[4714]: I0129 16:25:47.407413 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwwng\" (UniqueName: \"kubernetes.io/projected/74070831-862a-4d0a-83b0-4e3d64891601-kube-api-access-dwwng\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s\" (UID: \"74070831-862a-4d0a-83b0-4e3d64891601\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s" Jan 29 16:25:47 crc kubenswrapper[4714]: I0129 16:25:47.407732 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/74070831-862a-4d0a-83b0-4e3d64891601-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s\" (UID: \"74070831-862a-4d0a-83b0-4e3d64891601\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s" Jan 29 16:25:47 crc kubenswrapper[4714]: I0129 16:25:47.407766 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/74070831-862a-4d0a-83b0-4e3d64891601-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s\" (UID: \"74070831-862a-4d0a-83b0-4e3d64891601\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s" Jan 29 16:25:47 crc kubenswrapper[4714]: I0129 16:25:47.408217 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/74070831-862a-4d0a-83b0-4e3d64891601-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s\" (UID: \"74070831-862a-4d0a-83b0-4e3d64891601\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s" Jan 29 16:25:47 crc kubenswrapper[4714]: I0129 16:25:47.408251 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/74070831-862a-4d0a-83b0-4e3d64891601-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s\" (UID: \"74070831-862a-4d0a-83b0-4e3d64891601\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s" Jan 29 16:25:47 crc kubenswrapper[4714]: I0129 16:25:47.426301 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwwng\" (UniqueName: \"kubernetes.io/projected/74070831-862a-4d0a-83b0-4e3d64891601-kube-api-access-dwwng\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s\" (UID: \"74070831-862a-4d0a-83b0-4e3d64891601\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s" Jan 29 16:25:47 crc kubenswrapper[4714]: I0129 16:25:47.526751 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s" Jan 29 16:25:47 crc kubenswrapper[4714]: I0129 16:25:47.727806 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmqpk" event={"ID":"230dbe44-24bd-4a95-9f71-7ee36bb74cce","Type":"ContainerStarted","Data":"486c03c8de207535cc686dfcaa8ec86e8491b149b1078748b5ef7e236bd5cbd3"} Jan 29 16:25:47 crc kubenswrapper[4714]: I0129 16:25:47.743720 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lmqpk" podStartSLOduration=2.201220656 podStartE2EDuration="8.743697709s" podCreationTimestamp="2026-01-29 16:25:39 +0000 UTC" firstStartedPulling="2026-01-29 16:25:40.678004475 +0000 UTC m=+947.198505595" lastFinishedPulling="2026-01-29 16:25:47.220481528 +0000 UTC m=+953.740982648" observedRunningTime="2026-01-29 16:25:47.741448953 +0000 UTC m=+954.261950073" watchObservedRunningTime="2026-01-29 16:25:47.743697709 +0000 UTC m=+954.264198829" Jan 29 16:25:47 crc kubenswrapper[4714]: I0129 16:25:47.773924 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:47 crc kubenswrapper[4714]: I0129 16:25:47.849485 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:25:47 crc kubenswrapper[4714]: I0129 16:25:47.954950 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s"] Jan 29 16:25:48 crc kubenswrapper[4714]: I0129 16:25:48.734694 4714 generic.go:334] "Generic (PLEG): container finished" podID="74070831-862a-4d0a-83b0-4e3d64891601" containerID="08be91d5ade94d67396b725df7d3290e5e0b4eed8f678b830b5a24bf0aefb822" exitCode=0 Jan 29 16:25:48 crc kubenswrapper[4714]: I0129 16:25:48.734794 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s" event={"ID":"74070831-862a-4d0a-83b0-4e3d64891601","Type":"ContainerDied","Data":"08be91d5ade94d67396b725df7d3290e5e0b4eed8f678b830b5a24bf0aefb822"} Jan 29 16:25:48 crc kubenswrapper[4714]: I0129 16:25:48.734843 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s" event={"ID":"74070831-862a-4d0a-83b0-4e3d64891601","Type":"ContainerStarted","Data":"52b0391a8d42deb2c4b8c1aff420808b658d4de8de74c3e4fd16389cd7b50f5b"} Jan 29 16:25:49 crc kubenswrapper[4714]: I0129 16:25:49.471471 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lmqpk" Jan 29 16:25:49 crc kubenswrapper[4714]: I0129 16:25:49.471580 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lmqpk" Jan 29 16:25:49 crc kubenswrapper[4714]: I0129 16:25:49.524567 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lmqpk" Jan 29 16:25:50 crc kubenswrapper[4714]: I0129 16:25:50.747649 4714 generic.go:334] "Generic (PLEG): container finished" podID="74070831-862a-4d0a-83b0-4e3d64891601" containerID="9a1454161993efe4d3d18d4c054f92025122ffa043acc6e49d820a2c93adec47" exitCode=0 Jan 29 16:25:50 crc kubenswrapper[4714]: I0129 16:25:50.747810 4714 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s" event={"ID":"74070831-862a-4d0a-83b0-4e3d64891601","Type":"ContainerDied","Data":"9a1454161993efe4d3d18d4c054f92025122ffa043acc6e49d820a2c93adec47"} Jan 29 16:25:51 crc kubenswrapper[4714]: I0129 16:25:51.757860 4714 generic.go:334] "Generic (PLEG): container finished" podID="74070831-862a-4d0a-83b0-4e3d64891601" containerID="bf336482d3003324aec4b339b20443d78b6e477a15e7d92b43bd42b82e826811" exitCode=0 Jan 29 16:25:51 crc kubenswrapper[4714]: I0129 16:25:51.757987 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s" event={"ID":"74070831-862a-4d0a-83b0-4e3d64891601","Type":"ContainerDied","Data":"bf336482d3003324aec4b339b20443d78b6e477a15e7d92b43bd42b82e826811"} Jan 29 16:25:53 crc kubenswrapper[4714]: I0129 16:25:53.113005 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s" Jan 29 16:25:53 crc kubenswrapper[4714]: I0129 16:25:53.190658 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/74070831-862a-4d0a-83b0-4e3d64891601-bundle\") pod \"74070831-862a-4d0a-83b0-4e3d64891601\" (UID: \"74070831-862a-4d0a-83b0-4e3d64891601\") " Jan 29 16:25:53 crc kubenswrapper[4714]: I0129 16:25:53.190698 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/74070831-862a-4d0a-83b0-4e3d64891601-util\") pod \"74070831-862a-4d0a-83b0-4e3d64891601\" (UID: \"74070831-862a-4d0a-83b0-4e3d64891601\") " Jan 29 16:25:53 crc kubenswrapper[4714]: I0129 16:25:53.190737 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwwng\" (UniqueName: \"kubernetes.io/projected/74070831-862a-4d0a-83b0-4e3d64891601-kube-api-access-dwwng\") pod \"74070831-862a-4d0a-83b0-4e3d64891601\" (UID: \"74070831-862a-4d0a-83b0-4e3d64891601\") " Jan 29 16:25:53 crc kubenswrapper[4714]: I0129 16:25:53.191549 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74070831-862a-4d0a-83b0-4e3d64891601-bundle" (OuterVolumeSpecName: "bundle") pod "74070831-862a-4d0a-83b0-4e3d64891601" (UID: "74070831-862a-4d0a-83b0-4e3d64891601"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:25:53 crc kubenswrapper[4714]: I0129 16:25:53.198037 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74070831-862a-4d0a-83b0-4e3d64891601-kube-api-access-dwwng" (OuterVolumeSpecName: "kube-api-access-dwwng") pod "74070831-862a-4d0a-83b0-4e3d64891601" (UID: "74070831-862a-4d0a-83b0-4e3d64891601"). InnerVolumeSpecName "kube-api-access-dwwng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:25:53 crc kubenswrapper[4714]: I0129 16:25:53.209996 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74070831-862a-4d0a-83b0-4e3d64891601-util" (OuterVolumeSpecName: "util") pod "74070831-862a-4d0a-83b0-4e3d64891601" (UID: "74070831-862a-4d0a-83b0-4e3d64891601"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:25:53 crc kubenswrapper[4714]: I0129 16:25:53.292578 4714 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/74070831-862a-4d0a-83b0-4e3d64891601-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:25:53 crc kubenswrapper[4714]: I0129 16:25:53.292623 4714 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/74070831-862a-4d0a-83b0-4e3d64891601-util\") on node \"crc\" DevicePath \"\"" Jan 29 16:25:53 crc kubenswrapper[4714]: I0129 16:25:53.292633 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwwng\" (UniqueName: \"kubernetes.io/projected/74070831-862a-4d0a-83b0-4e3d64891601-kube-api-access-dwwng\") on node \"crc\" DevicePath \"\"" Jan 29 16:25:53 crc kubenswrapper[4714]: I0129 16:25:53.778281 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s" event={"ID":"74070831-862a-4d0a-83b0-4e3d64891601","Type":"ContainerDied","Data":"52b0391a8d42deb2c4b8c1aff420808b658d4de8de74c3e4fd16389cd7b50f5b"} Jan 29 16:25:53 crc kubenswrapper[4714]: I0129 16:25:53.778323 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52b0391a8d42deb2c4b8c1aff420808b658d4de8de74c3e4fd16389cd7b50f5b" Jan 29 16:25:53 crc kubenswrapper[4714]: I0129 16:25:53.778364 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s" Jan 29 16:25:54 crc kubenswrapper[4714]: I0129 16:25:54.955706 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wdwq5"] Jan 29 16:25:54 crc kubenswrapper[4714]: E0129 16:25:54.956128 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74070831-862a-4d0a-83b0-4e3d64891601" containerName="util" Jan 29 16:25:54 crc kubenswrapper[4714]: I0129 16:25:54.956156 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="74070831-862a-4d0a-83b0-4e3d64891601" containerName="util" Jan 29 16:25:54 crc kubenswrapper[4714]: E0129 16:25:54.956178 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74070831-862a-4d0a-83b0-4e3d64891601" containerName="pull" Jan 29 16:25:54 crc kubenswrapper[4714]: I0129 16:25:54.956188 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="74070831-862a-4d0a-83b0-4e3d64891601" containerName="pull" Jan 29 16:25:54 crc kubenswrapper[4714]: E0129 16:25:54.956216 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74070831-862a-4d0a-83b0-4e3d64891601" containerName="extract" Jan 29 16:25:54 crc kubenswrapper[4714]: I0129 16:25:54.956227 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="74070831-862a-4d0a-83b0-4e3d64891601" containerName="extract" Jan 29 16:25:54 crc kubenswrapper[4714]: I0129 16:25:54.956421 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="74070831-862a-4d0a-83b0-4e3d64891601" containerName="extract" Jan 29 16:25:54 crc kubenswrapper[4714]: I0129 16:25:54.957535 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wdwq5" Jan 29 16:25:54 crc kubenswrapper[4714]: I0129 16:25:54.968872 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wdwq5"] Jan 29 16:25:55 crc kubenswrapper[4714]: I0129 16:25:55.116641 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmm88\" (UniqueName: \"kubernetes.io/projected/8c12ad14-f878-42a1-a168-bad4026ec2dd-kube-api-access-tmm88\") pod \"redhat-marketplace-wdwq5\" (UID: \"8c12ad14-f878-42a1-a168-bad4026ec2dd\") " pod="openshift-marketplace/redhat-marketplace-wdwq5" Jan 29 16:25:55 crc kubenswrapper[4714]: I0129 16:25:55.116786 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c12ad14-f878-42a1-a168-bad4026ec2dd-utilities\") pod \"redhat-marketplace-wdwq5\" (UID: \"8c12ad14-f878-42a1-a168-bad4026ec2dd\") " pod="openshift-marketplace/redhat-marketplace-wdwq5" Jan 29 16:25:55 crc kubenswrapper[4714]: I0129 16:25:55.116856 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c12ad14-f878-42a1-a168-bad4026ec2dd-catalog-content\") pod \"redhat-marketplace-wdwq5\" (UID: \"8c12ad14-f878-42a1-a168-bad4026ec2dd\") " pod="openshift-marketplace/redhat-marketplace-wdwq5" Jan 29 16:25:55 crc kubenswrapper[4714]: I0129 16:25:55.218061 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c12ad14-f878-42a1-a168-bad4026ec2dd-catalog-content\") pod \"redhat-marketplace-wdwq5\" (UID: \"8c12ad14-f878-42a1-a168-bad4026ec2dd\") " pod="openshift-marketplace/redhat-marketplace-wdwq5" Jan 29 16:25:55 crc kubenswrapper[4714]: I0129 16:25:55.218118 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmm88\" (UniqueName: \"kubernetes.io/projected/8c12ad14-f878-42a1-a168-bad4026ec2dd-kube-api-access-tmm88\") pod \"redhat-marketplace-wdwq5\" (UID: \"8c12ad14-f878-42a1-a168-bad4026ec2dd\") " pod="openshift-marketplace/redhat-marketplace-wdwq5" Jan 29 16:25:55 crc kubenswrapper[4714]: I0129 16:25:55.218182 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c12ad14-f878-42a1-a168-bad4026ec2dd-utilities\") pod \"redhat-marketplace-wdwq5\" (UID: \"8c12ad14-f878-42a1-a168-bad4026ec2dd\") " pod="openshift-marketplace/redhat-marketplace-wdwq5" Jan 29 16:25:55 crc kubenswrapper[4714]: I0129 16:25:55.218668 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c12ad14-f878-42a1-a168-bad4026ec2dd-utilities\") pod \"redhat-marketplace-wdwq5\" (UID: \"8c12ad14-f878-42a1-a168-bad4026ec2dd\") " pod="openshift-marketplace/redhat-marketplace-wdwq5" Jan 29 16:25:55 crc kubenswrapper[4714]: I0129 16:25:55.218878 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c12ad14-f878-42a1-a168-bad4026ec2dd-catalog-content\") pod \"redhat-marketplace-wdwq5\" (UID: \"8c12ad14-f878-42a1-a168-bad4026ec2dd\") " pod="openshift-marketplace/redhat-marketplace-wdwq5" Jan 29 16:25:55 crc kubenswrapper[4714]: I0129 16:25:55.238025 4714 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tmm88\" (UniqueName: \"kubernetes.io/projected/8c12ad14-f878-42a1-a168-bad4026ec2dd-kube-api-access-tmm88\") pod \"redhat-marketplace-wdwq5\" (UID: \"8c12ad14-f878-42a1-a168-bad4026ec2dd\") " pod="openshift-marketplace/redhat-marketplace-wdwq5" Jan 29 16:25:55 crc kubenswrapper[4714]: I0129 16:25:55.273607 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wdwq5" Jan 29 16:25:55 crc kubenswrapper[4714]: I0129 16:25:55.723256 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wdwq5"] Jan 29 16:25:55 crc kubenswrapper[4714]: W0129 16:25:55.730415 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c12ad14_f878_42a1_a168_bad4026ec2dd.slice/crio-e9d23b1dec5222eaf00f0f2fac6279153030320fe12205a5c55c774a975165f4 WatchSource:0}: Error finding container e9d23b1dec5222eaf00f0f2fac6279153030320fe12205a5c55c774a975165f4: Status 404 returned error can't find the container with id e9d23b1dec5222eaf00f0f2fac6279153030320fe12205a5c55c774a975165f4 Jan 29 16:25:55 crc kubenswrapper[4714]: I0129 16:25:55.795129 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wdwq5" event={"ID":"8c12ad14-f878-42a1-a168-bad4026ec2dd","Type":"ContainerStarted","Data":"e9d23b1dec5222eaf00f0f2fac6279153030320fe12205a5c55c774a975165f4"} Jan 29 16:25:56 crc kubenswrapper[4714]: I0129 16:25:56.365051 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/root-account-create-update-2fh2r"] Jan 29 16:25:56 crc kubenswrapper[4714]: I0129 16:25:56.365754 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-2fh2r" Jan 29 16:25:56 crc kubenswrapper[4714]: I0129 16:25:56.368521 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"openstack-mariadb-root-db-secret" Jan 29 16:25:56 crc kubenswrapper[4714]: I0129 16:25:56.381968 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-2fh2r"] Jan 29 16:25:56 crc kubenswrapper[4714]: I0129 16:25:56.443546 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn2vl\" (UniqueName: \"kubernetes.io/projected/c94e6f97-6224-46d2-b406-f5d02a596cb7-kube-api-access-jn2vl\") pod \"root-account-create-update-2fh2r\" (UID: \"c94e6f97-6224-46d2-b406-f5d02a596cb7\") " pod="cinder-kuttl-tests/root-account-create-update-2fh2r" Jan 29 16:25:56 crc kubenswrapper[4714]: I0129 16:25:56.444196 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c94e6f97-6224-46d2-b406-f5d02a596cb7-operator-scripts\") pod \"root-account-create-update-2fh2r\" (UID: \"c94e6f97-6224-46d2-b406-f5d02a596cb7\") " pod="cinder-kuttl-tests/root-account-create-update-2fh2r" Jan 29 16:25:56 crc kubenswrapper[4714]: I0129 16:25:56.545601 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn2vl\" (UniqueName: \"kubernetes.io/projected/c94e6f97-6224-46d2-b406-f5d02a596cb7-kube-api-access-jn2vl\") pod \"root-account-create-update-2fh2r\" (UID: \"c94e6f97-6224-46d2-b406-f5d02a596cb7\") " pod="cinder-kuttl-tests/root-account-create-update-2fh2r" Jan 29 16:25:56 crc kubenswrapper[4714]: I0129 16:25:56.545702 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c94e6f97-6224-46d2-b406-f5d02a596cb7-operator-scripts\") pod \"root-account-create-update-2fh2r\" (UID: \"c94e6f97-6224-46d2-b406-f5d02a596cb7\") " pod="cinder-kuttl-tests/root-account-create-update-2fh2r" Jan 29 16:25:56 crc kubenswrapper[4714]: I0129 16:25:56.546397 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c94e6f97-6224-46d2-b406-f5d02a596cb7-operator-scripts\") pod \"root-account-create-update-2fh2r\" (UID: \"c94e6f97-6224-46d2-b406-f5d02a596cb7\") " pod="cinder-kuttl-tests/root-account-create-update-2fh2r" Jan 29 16:25:56 crc kubenswrapper[4714]: I0129 16:25:56.571905 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn2vl\" (UniqueName: \"kubernetes.io/projected/c94e6f97-6224-46d2-b406-f5d02a596cb7-kube-api-access-jn2vl\") pod \"root-account-create-update-2fh2r\" (UID: \"c94e6f97-6224-46d2-b406-f5d02a596cb7\") " pod="cinder-kuttl-tests/root-account-create-update-2fh2r" Jan 29 16:25:56 crc kubenswrapper[4714]: I0129 16:25:56.681264 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-2fh2r" Jan 29 16:25:56 crc kubenswrapper[4714]: I0129 16:25:56.805644 4714 generic.go:334] "Generic (PLEG): container finished" podID="8c12ad14-f878-42a1-a168-bad4026ec2dd" containerID="efa3dccd677eb8ead685e31efa78ca9b7337f378a73a59452840851ca329f89c" exitCode=0 Jan 29 16:25:56 crc kubenswrapper[4714]: I0129 16:25:56.806017 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wdwq5" event={"ID":"8c12ad14-f878-42a1-a168-bad4026ec2dd","Type":"ContainerDied","Data":"efa3dccd677eb8ead685e31efa78ca9b7337f378a73a59452840851ca329f89c"} Jan 29 16:25:57 crc kubenswrapper[4714]: I0129 16:25:57.133437 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-2fh2r"] Jan 29 16:25:57 crc kubenswrapper[4714]: E0129 16:25:57.458750 4714 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:25:57 crc kubenswrapper[4714]: E0129 16:25:57.459272 4714 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmm88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wdwq5_openshift-marketplace(8c12ad14-f878-42a1-a168-bad4026ec2dd): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:25:57 crc kubenswrapper[4714]: E0129 16:25:57.461320 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 
(Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:25:57 crc kubenswrapper[4714]: I0129 16:25:57.728448 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/openstack-galera-2" podUID="e367e739-45d9-4c71-82fa-ecda02da3277" containerName="galera" probeResult="failure" output=< Jan 29 16:25:57 crc kubenswrapper[4714]: wsrep_local_state_comment (Donor/Desynced) differs from Synced Jan 29 16:25:57 crc kubenswrapper[4714]: > Jan 29 16:25:57 crc kubenswrapper[4714]: I0129 16:25:57.822381 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/root-account-create-update-2fh2r" event={"ID":"c94e6f97-6224-46d2-b406-f5d02a596cb7","Type":"ContainerStarted","Data":"ae1e8fd69fe054dba679bc3d816ff2486311c7fd767f49bb3c77b8a2f9da9054"} Jan 29 16:25:57 crc kubenswrapper[4714]: I0129 16:25:57.822662 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/root-account-create-update-2fh2r" event={"ID":"c94e6f97-6224-46d2-b406-f5d02a596cb7","Type":"ContainerStarted","Data":"21ee325ce54c33324918e25586a05eb462c8b87e131df00ae9fd89812d284160"} Jan 29 16:25:57 crc kubenswrapper[4714]: E0129 16:25:57.823953 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:25:57 crc kubenswrapper[4714]: I0129 16:25:57.889763 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/root-account-create-update-2fh2r" podStartSLOduration=1.889744627 podStartE2EDuration="1.889744627s" podCreationTimestamp="2026-01-29 16:25:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:25:57.886582265 +0000 UTC m=+964.407083385" watchObservedRunningTime="2026-01-29 16:25:57.889744627 +0000 UTC m=+964.410245747" Jan 29 16:25:59 crc kubenswrapper[4714]: I0129 16:25:59.512270 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lmqpk" Jan 29 16:25:59 crc kubenswrapper[4714]: I0129 16:25:59.836662 4714 generic.go:334] "Generic (PLEG): container finished" podID="c94e6f97-6224-46d2-b406-f5d02a596cb7" containerID="ae1e8fd69fe054dba679bc3d816ff2486311c7fd767f49bb3c77b8a2f9da9054" exitCode=0 Jan 29 16:25:59 crc kubenswrapper[4714]: I0129 16:25:59.836715 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/root-account-create-update-2fh2r" event={"ID":"c94e6f97-6224-46d2-b406-f5d02a596cb7","Type":"ContainerDied","Data":"ae1e8fd69fe054dba679bc3d816ff2486311c7fd767f49bb3c77b8a2f9da9054"} Jan 29 16:26:00 crc kubenswrapper[4714]: I0129 16:26:00.123489 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:26:00 crc kubenswrapper[4714]: I0129 16:26:00.219161 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:26:01 crc kubenswrapper[4714]: I0129 16:26:01.175046 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-2fh2r" Jan 29 16:26:01 crc kubenswrapper[4714]: I0129 16:26:01.334327 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn2vl\" (UniqueName: \"kubernetes.io/projected/c94e6f97-6224-46d2-b406-f5d02a596cb7-kube-api-access-jn2vl\") pod \"c94e6f97-6224-46d2-b406-f5d02a596cb7\" (UID: \"c94e6f97-6224-46d2-b406-f5d02a596cb7\") " Jan 29 16:26:01 crc kubenswrapper[4714]: I0129 16:26:01.334516 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c94e6f97-6224-46d2-b406-f5d02a596cb7-operator-scripts\") pod \"c94e6f97-6224-46d2-b406-f5d02a596cb7\" (UID: \"c94e6f97-6224-46d2-b406-f5d02a596cb7\") " Jan 29 16:26:01 crc kubenswrapper[4714]: I0129 16:26:01.335041 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c94e6f97-6224-46d2-b406-f5d02a596cb7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c94e6f97-6224-46d2-b406-f5d02a596cb7" (UID: "c94e6f97-6224-46d2-b406-f5d02a596cb7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:26:01 crc kubenswrapper[4714]: I0129 16:26:01.345606 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c94e6f97-6224-46d2-b406-f5d02a596cb7-kube-api-access-jn2vl" (OuterVolumeSpecName: "kube-api-access-jn2vl") pod "c94e6f97-6224-46d2-b406-f5d02a596cb7" (UID: "c94e6f97-6224-46d2-b406-f5d02a596cb7"). InnerVolumeSpecName "kube-api-access-jn2vl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:26:01 crc kubenswrapper[4714]: I0129 16:26:01.436364 4714 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c94e6f97-6224-46d2-b406-f5d02a596cb7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:01 crc kubenswrapper[4714]: I0129 16:26:01.436448 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn2vl\" (UniqueName: \"kubernetes.io/projected/c94e6f97-6224-46d2-b406-f5d02a596cb7-kube-api-access-jn2vl\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:01 crc kubenswrapper[4714]: I0129 16:26:01.850005 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/root-account-create-update-2fh2r" event={"ID":"c94e6f97-6224-46d2-b406-f5d02a596cb7","Type":"ContainerDied","Data":"21ee325ce54c33324918e25586a05eb462c8b87e131df00ae9fd89812d284160"} Jan 29 16:26:01 crc kubenswrapper[4714]: I0129 16:26:01.850052 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21ee325ce54c33324918e25586a05eb462c8b87e131df00ae9fd89812d284160" Jan 29 16:26:01 crc kubenswrapper[4714]: I0129 16:26:01.850055 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-2fh2r" Jan 29 16:26:02 crc kubenswrapper[4714]: I0129 16:26:02.221656 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:26:02 crc kubenswrapper[4714]: I0129 16:26:02.339134 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:26:04 crc kubenswrapper[4714]: I0129 16:26:04.342708 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lmqpk"] Jan 29 16:26:04 crc kubenswrapper[4714]: I0129 16:26:04.343069 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lmqpk" podUID="230dbe44-24bd-4a95-9f71-7ee36bb74cce" containerName="registry-server" containerID="cri-o://486c03c8de207535cc686dfcaa8ec86e8491b149b1078748b5ef7e236bd5cbd3" gracePeriod=2 Jan 29 16:26:04 crc kubenswrapper[4714]: I0129 16:26:04.873161 4714 generic.go:334] "Generic (PLEG): container finished" podID="230dbe44-24bd-4a95-9f71-7ee36bb74cce" containerID="486c03c8de207535cc686dfcaa8ec86e8491b149b1078748b5ef7e236bd5cbd3" exitCode=0 Jan 29 16:26:04 crc kubenswrapper[4714]: I0129 16:26:04.873223 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmqpk" event={"ID":"230dbe44-24bd-4a95-9f71-7ee36bb74cce","Type":"ContainerDied","Data":"486c03c8de207535cc686dfcaa8ec86e8491b149b1078748b5ef7e236bd5cbd3"} Jan 29 16:26:05 crc kubenswrapper[4714]: I0129 16:26:05.325242 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lmqpk" Jan 29 16:26:05 crc kubenswrapper[4714]: I0129 16:26:05.392183 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/230dbe44-24bd-4a95-9f71-7ee36bb74cce-utilities\") pod \"230dbe44-24bd-4a95-9f71-7ee36bb74cce\" (UID: \"230dbe44-24bd-4a95-9f71-7ee36bb74cce\") " Jan 29 16:26:05 crc kubenswrapper[4714]: I0129 16:26:05.392839 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fppdq\" (UniqueName: \"kubernetes.io/projected/230dbe44-24bd-4a95-9f71-7ee36bb74cce-kube-api-access-fppdq\") pod \"230dbe44-24bd-4a95-9f71-7ee36bb74cce\" (UID: \"230dbe44-24bd-4a95-9f71-7ee36bb74cce\") " Jan 29 16:26:05 crc kubenswrapper[4714]: I0129 16:26:05.393070 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/230dbe44-24bd-4a95-9f71-7ee36bb74cce-catalog-content\") pod \"230dbe44-24bd-4a95-9f71-7ee36bb74cce\" (UID: \"230dbe44-24bd-4a95-9f71-7ee36bb74cce\") " Jan 29 16:26:05 crc kubenswrapper[4714]: I0129 16:26:05.393188 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/230dbe44-24bd-4a95-9f71-7ee36bb74cce-utilities" (OuterVolumeSpecName: "utilities") pod "230dbe44-24bd-4a95-9f71-7ee36bb74cce" (UID: "230dbe44-24bd-4a95-9f71-7ee36bb74cce"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:26:05 crc kubenswrapper[4714]: I0129 16:26:05.400454 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/230dbe44-24bd-4a95-9f71-7ee36bb74cce-kube-api-access-fppdq" (OuterVolumeSpecName: "kube-api-access-fppdq") pod "230dbe44-24bd-4a95-9f71-7ee36bb74cce" (UID: "230dbe44-24bd-4a95-9f71-7ee36bb74cce"). InnerVolumeSpecName "kube-api-access-fppdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:26:05 crc kubenswrapper[4714]: I0129 16:26:05.445747 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/230dbe44-24bd-4a95-9f71-7ee36bb74cce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "230dbe44-24bd-4a95-9f71-7ee36bb74cce" (UID: "230dbe44-24bd-4a95-9f71-7ee36bb74cce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:26:05 crc kubenswrapper[4714]: I0129 16:26:05.495308 4714 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/230dbe44-24bd-4a95-9f71-7ee36bb74cce-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:05 crc kubenswrapper[4714]: I0129 16:26:05.495344 4714 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/230dbe44-24bd-4a95-9f71-7ee36bb74cce-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:05 crc kubenswrapper[4714]: I0129 16:26:05.495354 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fppdq\" (UniqueName: \"kubernetes.io/projected/230dbe44-24bd-4a95-9f71-7ee36bb74cce-kube-api-access-fppdq\") on node \"crc\" DevicePath \"\"" Jan 29 16:26:05 crc kubenswrapper[4714]: I0129 16:26:05.881039 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmqpk" event={"ID":"230dbe44-24bd-4a95-9f71-7ee36bb74cce","Type":"ContainerDied","Data":"b963773a81de472dcc9c53a373aee4fe097bee647f3f7cf04a8f2ef907468156"} Jan 29 16:26:05 crc kubenswrapper[4714]: I0129 16:26:05.881095 4714 scope.go:117] "RemoveContainer" containerID="486c03c8de207535cc686dfcaa8ec86e8491b149b1078748b5ef7e236bd5cbd3" Jan 29 16:26:05 crc kubenswrapper[4714]: I0129 16:26:05.881105 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lmqpk" Jan 29 16:26:05 crc kubenswrapper[4714]: I0129 16:26:05.916524 4714 scope.go:117] "RemoveContainer" containerID="d22cb237e69d1d81bb2cf37fadc3bb5651ed34bb7ad52432494b2ea7410d53d9" Jan 29 16:26:05 crc kubenswrapper[4714]: I0129 16:26:05.917588 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lmqpk"] Jan 29 16:26:05 crc kubenswrapper[4714]: I0129 16:26:05.924701 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lmqpk"] Jan 29 16:26:05 crc kubenswrapper[4714]: I0129 16:26:05.937353 4714 scope.go:117] "RemoveContainer" containerID="5b730a9dfe53ac182298170cbbd5116f09f6dc7bc63b9980a70b7d90c7997752" Jan 29 16:26:06 crc kubenswrapper[4714]: I0129 16:26:06.195168 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="230dbe44-24bd-4a95-9f71-7ee36bb74cce" path="/var/lib/kubelet/pods/230dbe44-24bd-4a95-9f71-7ee36bb74cce/volumes" Jan 29 16:26:08 crc kubenswrapper[4714]: I0129 16:26:08.153477 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-xprqx"] Jan 29 16:26:08 crc kubenswrapper[4714]: E0129 16:26:08.153792 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="230dbe44-24bd-4a95-9f71-7ee36bb74cce" containerName="registry-server" Jan 29 16:26:08 crc kubenswrapper[4714]: I0129 16:26:08.153808 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="230dbe44-24bd-4a95-9f71-7ee36bb74cce" containerName="registry-server" Jan 29 16:26:08 crc kubenswrapper[4714]: E0129 16:26:08.153825 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="230dbe44-24bd-4a95-9f71-7ee36bb74cce" containerName="extract-utilities" Jan 29 16:26:08 crc kubenswrapper[4714]: I0129 16:26:08.153834 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="230dbe44-24bd-4a95-9f71-7ee36bb74cce" containerName="extract-utilities" Jan 29 16:26:08 crc kubenswrapper[4714]: E0129 16:26:08.153852 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c94e6f97-6224-46d2-b406-f5d02a596cb7" containerName="mariadb-account-create-update" Jan 29 16:26:08 crc kubenswrapper[4714]: I0129 16:26:08.153860 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="c94e6f97-6224-46d2-b406-f5d02a596cb7" containerName="mariadb-account-create-update" Jan 29 16:26:08 crc kubenswrapper[4714]: E0129 16:26:08.153873 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="230dbe44-24bd-4a95-9f71-7ee36bb74cce" containerName="extract-content" Jan 29 16:26:08 crc kubenswrapper[4714]: I0129 16:26:08.153881 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="230dbe44-24bd-4a95-9f71-7ee36bb74cce" containerName="extract-content" Jan 29 16:26:08 crc kubenswrapper[4714]: I0129 16:26:08.154043 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="c94e6f97-6224-46d2-b406-f5d02a596cb7" containerName="mariadb-account-create-update" Jan 29 16:26:08 crc kubenswrapper[4714]: I0129 16:26:08.154062 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="230dbe44-24bd-4a95-9f71-7ee36bb74cce" containerName="registry-server" Jan 29 16:26:08 crc kubenswrapper[4714]: I0129 16:26:08.154597 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-xprqx" Jan 29 16:26:08 crc kubenswrapper[4714]: I0129 16:26:08.159976 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-dockercfg-d5fnd" Jan 29 16:26:08 crc kubenswrapper[4714]: I0129 16:26:08.171434 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-xprqx"] Jan 29 16:26:08 crc kubenswrapper[4714]: I0129 16:26:08.240463 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2fkn\" (UniqueName: \"kubernetes.io/projected/12c4aba1-9cb7-4f12-bbd5-fee3ccfa0c88-kube-api-access-p2fkn\") pod \"rabbitmq-cluster-operator-779fc9694b-xprqx\" (UID: \"12c4aba1-9cb7-4f12-bbd5-fee3ccfa0c88\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-xprqx" Jan 29 16:26:08 crc kubenswrapper[4714]: I0129 16:26:08.342121 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2fkn\" (UniqueName: \"kubernetes.io/projected/12c4aba1-9cb7-4f12-bbd5-fee3ccfa0c88-kube-api-access-p2fkn\") pod \"rabbitmq-cluster-operator-779fc9694b-xprqx\" (UID: \"12c4aba1-9cb7-4f12-bbd5-fee3ccfa0c88\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-xprqx" Jan 29 16:26:08 crc kubenswrapper[4714]: I0129 16:26:08.361017 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2fkn\" (UniqueName: \"kubernetes.io/projected/12c4aba1-9cb7-4f12-bbd5-fee3ccfa0c88-kube-api-access-p2fkn\") pod \"rabbitmq-cluster-operator-779fc9694b-xprqx\" (UID: \"12c4aba1-9cb7-4f12-bbd5-fee3ccfa0c88\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-xprqx" Jan 29 16:26:08 crc kubenswrapper[4714]: I0129 16:26:08.486771 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-xprqx" Jan 29 16:26:08 crc kubenswrapper[4714]: I0129 16:26:08.938628 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-xprqx"] Jan 29 16:26:09 crc kubenswrapper[4714]: I0129 16:26:09.919830 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-xprqx" event={"ID":"12c4aba1-9cb7-4f12-bbd5-fee3ccfa0c88","Type":"ContainerStarted","Data":"9cef1a74877ad1a74c47eaad2e3c11a8681670f01de34b07c07c5549def07a12"} Jan 29 16:26:11 crc kubenswrapper[4714]: E0129 16:26:11.568307 4714 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:26:11 crc kubenswrapper[4714]: E0129 16:26:11.569403 4714 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmm88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wdwq5_openshift-marketplace(8c12ad14-f878-42a1-a168-bad4026ec2dd): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:26:11 crc kubenswrapper[4714]: E0129 16:26:11.570662 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:26:12 crc kubenswrapper[4714]: I0129 16:26:12.943163 4714 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-xprqx" event={"ID":"12c4aba1-9cb7-4f12-bbd5-fee3ccfa0c88","Type":"ContainerStarted","Data":"0c6b038c874feac00a0ea4a05fbd431f7014a4ed8fc71291c23ec972425e646d"} Jan 29 16:26:12 crc kubenswrapper[4714]: I0129 16:26:12.965972 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-xprqx" podStartSLOduration=1.9700317840000001 podStartE2EDuration="4.965894368s" podCreationTimestamp="2026-01-29 16:26:08 +0000 UTC" firstStartedPulling="2026-01-29 16:26:08.948000927 +0000 UTC m=+975.468502047" lastFinishedPulling="2026-01-29 16:26:11.943863511 +0000 UTC m=+978.464364631" observedRunningTime="2026-01-29 16:26:12.964282593 +0000 UTC m=+979.484783783" watchObservedRunningTime="2026-01-29 16:26:12.965894368 +0000 UTC m=+979.486395528" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.740287 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.741681 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.743430 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"rabbitmq-server-dockercfg-4pzhl" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.743546 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"rabbitmq-plugins-conf" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.744189 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"rabbitmq-erlang-cookie" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.744266 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"rabbitmq-server-conf" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.744445 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"rabbitmq-default-user" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.751840 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.854997 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pgjf\" (UniqueName: \"kubernetes.io/projected/55e23ac1-a89b-4689-a17d-bee875f7783e-kube-api-access-8pgjf\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.855045 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/55e23ac1-a89b-4689-a17d-bee875f7783e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.855128 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/55e23ac1-a89b-4689-a17d-bee875f7783e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.855209 4714 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/55e23ac1-a89b-4689-a17d-bee875f7783e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.855337 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/55e23ac1-a89b-4689-a17d-bee875f7783e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.855358 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/55e23ac1-a89b-4689-a17d-bee875f7783e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.855379 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/55e23ac1-a89b-4689-a17d-bee875f7783e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.855416 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-21c03d9a-bbed-44e7-9b38-49f8915c54d2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21c03d9a-bbed-44e7-9b38-49f8915c54d2\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.956490 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/55e23ac1-a89b-4689-a17d-bee875f7783e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.956568 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/55e23ac1-a89b-4689-a17d-bee875f7783e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.956653 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/55e23ac1-a89b-4689-a17d-bee875f7783e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.956687 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/55e23ac1-a89b-4689-a17d-bee875f7783e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.956718 4714 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/55e23ac1-a89b-4689-a17d-bee875f7783e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.956757 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-21c03d9a-bbed-44e7-9b38-49f8915c54d2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21c03d9a-bbed-44e7-9b38-49f8915c54d2\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.956810 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pgjf\" (UniqueName: \"kubernetes.io/projected/55e23ac1-a89b-4689-a17d-bee875f7783e-kube-api-access-8pgjf\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.956845 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/55e23ac1-a89b-4689-a17d-bee875f7783e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.958049 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/55e23ac1-a89b-4689-a17d-bee875f7783e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.959554 4714 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.959591 4714 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-21c03d9a-bbed-44e7-9b38-49f8915c54d2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21c03d9a-bbed-44e7-9b38-49f8915c54d2\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b382ee18eee6fccf46656a68ff47f48e9aa5ccd13f3cf6bd6d751a5365eb3cfb/globalmount\"" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.959886 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/55e23ac1-a89b-4689-a17d-bee875f7783e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.960309 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/55e23ac1-a89b-4689-a17d-bee875f7783e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.965678 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/55e23ac1-a89b-4689-a17d-bee875f7783e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.965677 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/55e23ac1-a89b-4689-a17d-bee875f7783e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.971780 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/55e23ac1-a89b-4689-a17d-bee875f7783e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.976203 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pgjf\" (UniqueName: \"kubernetes.io/projected/55e23ac1-a89b-4689-a17d-bee875f7783e-kube-api-access-8pgjf\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:15 crc kubenswrapper[4714]: I0129 16:26:15.996383 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-21c03d9a-bbed-44e7-9b38-49f8915c54d2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21c03d9a-bbed-44e7-9b38-49f8915c54d2\") pod \"rabbitmq-server-0\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:26:16 crc kubenswrapper[4714]: I0129 16:26:16.091085 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/rabbitmq-server-0"
Jan 29 16:26:16 crc kubenswrapper[4714]: I0129 16:26:16.514965 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"]
Jan 29 16:26:16 crc kubenswrapper[4714]: W0129 16:26:16.520229 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55e23ac1_a89b_4689_a17d_bee875f7783e.slice/crio-299eb5904909cd50412aad25a871f9888d290b3d4f1acfe53103c96e6f05a1bc WatchSource:0}: Error finding container 299eb5904909cd50412aad25a871f9888d290b3d4f1acfe53103c96e6f05a1bc: Status 404 returned error can't find the container with id 299eb5904909cd50412aad25a871f9888d290b3d4f1acfe53103c96e6f05a1bc
Jan 29 16:26:16 crc kubenswrapper[4714]: I0129 16:26:16.967118 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/rabbitmq-server-0" event={"ID":"55e23ac1-a89b-4689-a17d-bee875f7783e","Type":"ContainerStarted","Data":"299eb5904909cd50412aad25a871f9888d290b3d4f1acfe53103c96e6f05a1bc"}
Jan 29 16:26:18 crc kubenswrapper[4714]: I0129 16:26:18.352305 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-index-vtc5h"]
Jan 29 16:26:18 crc kubenswrapper[4714]: I0129 16:26:18.354856 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-vtc5h"
Jan 29 16:26:18 crc kubenswrapper[4714]: I0129 16:26:18.358106 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-index-dockercfg-klkzg"
Jan 29 16:26:18 crc kubenswrapper[4714]: I0129 16:26:18.361804 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-vtc5h"]
Jan 29 16:26:18 crc kubenswrapper[4714]: I0129 16:26:18.509896 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsc27\" (UniqueName: \"kubernetes.io/projected/668764e7-6295-4275-bcc9-24b680ec685f-kube-api-access-wsc27\") pod \"keystone-operator-index-vtc5h\" (UID: \"668764e7-6295-4275-bcc9-24b680ec685f\") " pod="openstack-operators/keystone-operator-index-vtc5h"
Jan 29 16:26:18 crc kubenswrapper[4714]: I0129 16:26:18.610661 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsc27\" (UniqueName: \"kubernetes.io/projected/668764e7-6295-4275-bcc9-24b680ec685f-kube-api-access-wsc27\") pod \"keystone-operator-index-vtc5h\" (UID: \"668764e7-6295-4275-bcc9-24b680ec685f\") " pod="openstack-operators/keystone-operator-index-vtc5h"
Jan 29 16:26:18 crc kubenswrapper[4714]: I0129 16:26:18.637442 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsc27\" (UniqueName: \"kubernetes.io/projected/668764e7-6295-4275-bcc9-24b680ec685f-kube-api-access-wsc27\") pod \"keystone-operator-index-vtc5h\" (UID: \"668764e7-6295-4275-bcc9-24b680ec685f\") " pod="openstack-operators/keystone-operator-index-vtc5h"
Jan 29 16:26:18 crc kubenswrapper[4714]: I0129 16:26:18.723582 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-vtc5h"
Jan 29 16:26:18 crc kubenswrapper[4714]: I0129 16:26:18.763063 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w58tg"]
Jan 29 16:26:18 crc kubenswrapper[4714]: I0129 16:26:18.765084 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w58tg"
Jan 29 16:26:18 crc kubenswrapper[4714]: I0129 16:26:18.770775 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w58tg"]
Jan 29 16:26:18 crc kubenswrapper[4714]: I0129 16:26:18.914060 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db1a5ea4-712c-4ec8-aeae-c44998fa51e1-catalog-content\") pod \"certified-operators-w58tg\" (UID: \"db1a5ea4-712c-4ec8-aeae-c44998fa51e1\") " pod="openshift-marketplace/certified-operators-w58tg"
Jan 29 16:26:18 crc kubenswrapper[4714]: I0129 16:26:18.914152 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db1a5ea4-712c-4ec8-aeae-c44998fa51e1-utilities\") pod \"certified-operators-w58tg\" (UID: \"db1a5ea4-712c-4ec8-aeae-c44998fa51e1\") " pod="openshift-marketplace/certified-operators-w58tg"
Jan 29 16:26:18 crc kubenswrapper[4714]: I0129 16:26:18.914188 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn67x\" (UniqueName: \"kubernetes.io/projected/db1a5ea4-712c-4ec8-aeae-c44998fa51e1-kube-api-access-rn67x\") pod \"certified-operators-w58tg\" (UID: \"db1a5ea4-712c-4ec8-aeae-c44998fa51e1\") " pod="openshift-marketplace/certified-operators-w58tg"
Jan 29 16:26:19 crc kubenswrapper[4714]: I0129 16:26:19.015244 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db1a5ea4-712c-4ec8-aeae-c44998fa51e1-utilities\") pod \"certified-operators-w58tg\" (UID: \"db1a5ea4-712c-4ec8-aeae-c44998fa51e1\") " pod="openshift-marketplace/certified-operators-w58tg"
Jan 29 16:26:19 crc kubenswrapper[4714]: I0129 16:26:19.015290 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn67x\" (UniqueName: \"kubernetes.io/projected/db1a5ea4-712c-4ec8-aeae-c44998fa51e1-kube-api-access-rn67x\") pod \"certified-operators-w58tg\" (UID: \"db1a5ea4-712c-4ec8-aeae-c44998fa51e1\") " pod="openshift-marketplace/certified-operators-w58tg"
Jan 29 16:26:19 crc kubenswrapper[4714]: I0129 16:26:19.015374 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db1a5ea4-712c-4ec8-aeae-c44998fa51e1-catalog-content\") pod \"certified-operators-w58tg\" (UID: \"db1a5ea4-712c-4ec8-aeae-c44998fa51e1\") " pod="openshift-marketplace/certified-operators-w58tg"
Jan 29 16:26:19 crc kubenswrapper[4714]: I0129 16:26:19.015909 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db1a5ea4-712c-4ec8-aeae-c44998fa51e1-utilities\") pod \"certified-operators-w58tg\" (UID: \"db1a5ea4-712c-4ec8-aeae-c44998fa51e1\") " pod="openshift-marketplace/certified-operators-w58tg"
Jan 29 16:26:19 crc kubenswrapper[4714]: I0129 16:26:19.015959 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db1a5ea4-712c-4ec8-aeae-c44998fa51e1-catalog-content\") pod \"certified-operators-w58tg\" (UID: \"db1a5ea4-712c-4ec8-aeae-c44998fa51e1\") " pod="openshift-marketplace/certified-operators-w58tg"
Jan 29 16:26:19 crc kubenswrapper[4714]: I0129 16:26:19.032772 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn67x\" (UniqueName: \"kubernetes.io/projected/db1a5ea4-712c-4ec8-aeae-c44998fa51e1-kube-api-access-rn67x\") pod \"certified-operators-w58tg\" (UID: \"db1a5ea4-712c-4ec8-aeae-c44998fa51e1\") " pod="openshift-marketplace/certified-operators-w58tg"
Jan 29 16:26:19 crc kubenswrapper[4714]: I0129 16:26:19.133010 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w58tg"
Jan 29 16:26:19 crc kubenswrapper[4714]: I0129 16:26:19.182262 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-vtc5h"]
Jan 29 16:26:19 crc kubenswrapper[4714]: W0129 16:26:19.192006 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod668764e7_6295_4275_bcc9_24b680ec685f.slice/crio-047c3adf9e0c960c54099b8f9a0a168467b6e29a2c3acff719ceb9bbe1f69c79 WatchSource:0}: Error finding container 047c3adf9e0c960c54099b8f9a0a168467b6e29a2c3acff719ceb9bbe1f69c79: Status 404 returned error can't find the container with id 047c3adf9e0c960c54099b8f9a0a168467b6e29a2c3acff719ceb9bbe1f69c79
Jan 29 16:26:19 crc kubenswrapper[4714]: I0129 16:26:19.566806 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w58tg"]
Jan 29 16:26:19 crc kubenswrapper[4714]: I0129 16:26:19.987170 4714 generic.go:334] "Generic (PLEG): container finished" podID="db1a5ea4-712c-4ec8-aeae-c44998fa51e1" containerID="977047a32f415a72ab936afe2d1774146105933da01176d2aabeb71559b4afad" exitCode=0
Jan 29 16:26:19 crc kubenswrapper[4714]: I0129 16:26:19.987263 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w58tg" event={"ID":"db1a5ea4-712c-4ec8-aeae-c44998fa51e1","Type":"ContainerDied","Data":"977047a32f415a72ab936afe2d1774146105933da01176d2aabeb71559b4afad"}
Jan 29 16:26:19 crc kubenswrapper[4714]: I0129 16:26:19.987486 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w58tg" event={"ID":"db1a5ea4-712c-4ec8-aeae-c44998fa51e1","Type":"ContainerStarted","Data":"fa3e0fa56a351fb40b73e185e0120625630ac7fd4a18cf99fdfb008b5ae7fadf"}
Jan 29 16:26:19 crc kubenswrapper[4714]: I0129 16:26:19.989049 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-vtc5h" event={"ID":"668764e7-6295-4275-bcc9-24b680ec685f","Type":"ContainerStarted","Data":"047c3adf9e0c960c54099b8f9a0a168467b6e29a2c3acff719ceb9bbe1f69c79"}
Jan 29 16:26:20 crc kubenswrapper[4714]: E0129 16:26:20.131227 4714 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 29 16:26:20 crc kubenswrapper[4714]: E0129 16:26:20.131479 4714 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rn67x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-w58tg_openshift-marketplace(db1a5ea4-712c-4ec8-aeae-c44998fa51e1): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:26:20 crc kubenswrapper[4714]: E0129 16:26:20.132666 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-w58tg" podUID="db1a5ea4-712c-4ec8-aeae-c44998fa51e1"
Jan 29 16:26:20 crc kubenswrapper[4714]: I0129 16:26:20.998071 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-vtc5h" event={"ID":"668764e7-6295-4275-bcc9-24b680ec685f","Type":"ContainerStarted","Data":"dc243f10cd3a85e410a8968c999f4a42fc022a8b377403e54fe6779138d67aa8"}
Jan 29 16:26:21 crc kubenswrapper[4714]: I0129 16:26:21.029626 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-index-vtc5h" podStartSLOduration=2.06601316 podStartE2EDuration="3.029607142s" podCreationTimestamp="2026-01-29 16:26:18 +0000 UTC" firstStartedPulling="2026-01-29 16:26:19.197171482 +0000 UTC m=+985.717672612" lastFinishedPulling="2026-01-29 16:26:20.160765474 +0000 UTC m=+986.681266594" observedRunningTime="2026-01-29 16:26:21.025909219 +0000 UTC m=+987.546410339" watchObservedRunningTime="2026-01-29 16:26:21.029607142 +0000 UTC m=+987.550108262"
Jan 29 16:26:21 crc kubenswrapper[4714]: E0129 16:26:21.740692 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-w58tg" podUID="db1a5ea4-712c-4ec8-aeae-c44998fa51e1"
Jan 29 16:26:23 crc kubenswrapper[4714]: E0129 16:26:23.207454 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd"
Jan 29 16:26:27 crc kubenswrapper[4714]: I0129 16:26:27.045018 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/rabbitmq-server-0" event={"ID":"55e23ac1-a89b-4689-a17d-bee875f7783e","Type":"ContainerStarted","Data":"eb95fbc4965d0aeffea6fce3a32742ee3a1bc5446fd33b5341ed6d4be042d493"}
Jan 29 16:26:27 crc kubenswrapper[4714]: I0129 16:26:27.844764 4714 patch_prober.go:28] interesting pod/machine-config-daemon-ppngk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:26:27 crc kubenswrapper[4714]: I0129 16:26:27.845123 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:26:28 crc kubenswrapper[4714]: I0129 16:26:28.725043 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/keystone-operator-index-vtc5h"
Jan 29 16:26:28 crc kubenswrapper[4714]: I0129 16:26:28.725083 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-index-vtc5h"
Jan 29 16:26:28 crc kubenswrapper[4714]: I0129 16:26:28.756802 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/keystone-operator-index-vtc5h"
Jan 29 16:26:29 crc kubenswrapper[4714]: I0129 16:26:29.082747 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-index-vtc5h"
Jan 29 16:26:30 crc kubenswrapper[4714]: I0129 16:26:30.791826 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg"]
Jan 29 16:26:30 crc kubenswrapper[4714]: I0129 16:26:30.793175 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg"
Jan 29 16:26:30 crc kubenswrapper[4714]: I0129 16:26:30.840521 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hwqbr"
Jan 29 16:26:30 crc kubenswrapper[4714]: I0129 16:26:30.855288 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg"]
Jan 29 16:26:30 crc kubenswrapper[4714]: I0129 16:26:30.888591 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd598a7a-34ba-4392-908b-c18d89648bb5-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg\" (UID: \"dd598a7a-34ba-4392-908b-c18d89648bb5\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg"
Jan 29 16:26:30 crc kubenswrapper[4714]: I0129 16:26:30.888642 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlxp5\" (UniqueName: \"kubernetes.io/projected/dd598a7a-34ba-4392-908b-c18d89648bb5-kube-api-access-wlxp5\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg\" (UID: \"dd598a7a-34ba-4392-908b-c18d89648bb5\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg"
Jan 29 16:26:30 crc kubenswrapper[4714]: I0129 16:26:30.888875 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd598a7a-34ba-4392-908b-c18d89648bb5-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg\" (UID: \"dd598a7a-34ba-4392-908b-c18d89648bb5\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg"
Jan 29 16:26:30 crc kubenswrapper[4714]: I0129 16:26:30.990275 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd598a7a-34ba-4392-908b-c18d89648bb5-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg\" (UID: \"dd598a7a-34ba-4392-908b-c18d89648bb5\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg"
Jan 29 16:26:30 crc kubenswrapper[4714]: I0129 16:26:30.991233 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd598a7a-34ba-4392-908b-c18d89648bb5-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg\" (UID: \"dd598a7a-34ba-4392-908b-c18d89648bb5\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg"
Jan 29 16:26:30 crc kubenswrapper[4714]: I0129 16:26:30.991343 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd598a7a-34ba-4392-908b-c18d89648bb5-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg\" (UID: \"dd598a7a-34ba-4392-908b-c18d89648bb5\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg"
Jan 29 16:26:30 crc kubenswrapper[4714]: I0129 16:26:30.990575 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd598a7a-34ba-4392-908b-c18d89648bb5-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg\" (UID: \"dd598a7a-34ba-4392-908b-c18d89648bb5\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg"
Jan 29 16:26:30 crc kubenswrapper[4714]: I0129 16:26:30.991494 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlxp5\" (UniqueName: \"kubernetes.io/projected/dd598a7a-34ba-4392-908b-c18d89648bb5-kube-api-access-wlxp5\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg\" (UID: \"dd598a7a-34ba-4392-908b-c18d89648bb5\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg"
Jan 29 16:26:31 crc kubenswrapper[4714]: I0129 16:26:31.011309 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlxp5\" (UniqueName: \"kubernetes.io/projected/dd598a7a-34ba-4392-908b-c18d89648bb5-kube-api-access-wlxp5\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg\" (UID: \"dd598a7a-34ba-4392-908b-c18d89648bb5\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg"
Jan 29 16:26:31 crc kubenswrapper[4714]: I0129 16:26:31.163857 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg"
Jan 29 16:26:31 crc kubenswrapper[4714]: I0129 16:26:31.543395 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg"]
Jan 29 16:26:32 crc kubenswrapper[4714]: I0129 16:26:32.074609 4714 generic.go:334] "Generic (PLEG): container finished" podID="dd598a7a-34ba-4392-908b-c18d89648bb5" containerID="cdd9e5087dbe100b4200dc045cc6536b7bd5644b604c1ad48cd724f12116a2d5" exitCode=0
Jan 29 16:26:32 crc kubenswrapper[4714]: I0129 16:26:32.074722 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg" event={"ID":"dd598a7a-34ba-4392-908b-c18d89648bb5","Type":"ContainerDied","Data":"cdd9e5087dbe100b4200dc045cc6536b7bd5644b604c1ad48cd724f12116a2d5"}
Jan 29 16:26:32 crc kubenswrapper[4714]: I0129 16:26:32.074990 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg" event={"ID":"dd598a7a-34ba-4392-908b-c18d89648bb5","Type":"ContainerStarted","Data":"2c5ce017fddcc26d58ff7fc5c31e1e866cf3cb41da47bd4a24dc52b2e88e55e7"}
Jan 29 16:26:35 crc kubenswrapper[4714]: I0129 16:26:35.093636 4714 generic.go:334] "Generic (PLEG): container finished" podID="dd598a7a-34ba-4392-908b-c18d89648bb5" containerID="a293414bfe6fecbd34f2097ce525abe6d50f764495aa3e4545c1f2cdb4d889ff" exitCode=0
Jan 29 16:26:35 crc kubenswrapper[4714]: I0129 16:26:35.093730 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg" event={"ID":"dd598a7a-34ba-4392-908b-c18d89648bb5","Type":"ContainerDied","Data":"a293414bfe6fecbd34f2097ce525abe6d50f764495aa3e4545c1f2cdb4d889ff"}
Jan 29 16:26:35 crc kubenswrapper[4714]: E0129 16:26:35.309637 4714 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 29 16:26:35 crc kubenswrapper[4714]: E0129 16:26:35.310108 4714 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rn67x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-w58tg_openshift-marketplace(db1a5ea4-712c-4ec8-aeae-c44998fa51e1): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:26:35 crc kubenswrapper[4714]: E0129 16:26:35.311305 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-w58tg" podUID="db1a5ea4-712c-4ec8-aeae-c44998fa51e1"
Jan 29 16:26:36 crc kubenswrapper[4714]: E0129 16:26:36.311050 4714 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 29 16:26:36 crc kubenswrapper[4714]: E0129 16:26:36.311237 4714 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmm88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wdwq5_openshift-marketplace(8c12ad14-f878-42a1-a168-bad4026ec2dd): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:26:36 crc kubenswrapper[4714]: E0129 16:26:36.312761 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd"
Jan 29 16:26:37 crc kubenswrapper[4714]: I0129 16:26:37.111524 4714 generic.go:334] "Generic (PLEG): container finished" podID="dd598a7a-34ba-4392-908b-c18d89648bb5" containerID="e9b290e5fae1b9ebd91874f0c7f54baf70c50604b0924e9b333424187e1578aa" exitCode=0
Jan 29 16:26:37 crc kubenswrapper[4714]: I0129 16:26:37.111691 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg" event={"ID":"dd598a7a-34ba-4392-908b-c18d89648bb5","Type":"ContainerDied","Data":"e9b290e5fae1b9ebd91874f0c7f54baf70c50604b0924e9b333424187e1578aa"}
Jan 29 16:26:38 crc kubenswrapper[4714]: I0129 16:26:38.445273 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg"
Jan 29 16:26:38 crc kubenswrapper[4714]: I0129 16:26:38.498257 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlxp5\" (UniqueName: \"kubernetes.io/projected/dd598a7a-34ba-4392-908b-c18d89648bb5-kube-api-access-wlxp5\") pod \"dd598a7a-34ba-4392-908b-c18d89648bb5\" (UID: \"dd598a7a-34ba-4392-908b-c18d89648bb5\") "
Jan 29 16:26:38 crc kubenswrapper[4714]: I0129 16:26:38.498328 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd598a7a-34ba-4392-908b-c18d89648bb5-util\") pod \"dd598a7a-34ba-4392-908b-c18d89648bb5\" (UID: \"dd598a7a-34ba-4392-908b-c18d89648bb5\") "
Jan 29 16:26:38 crc kubenswrapper[4714]: I0129 16:26:38.498376 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd598a7a-34ba-4392-908b-c18d89648bb5-bundle\") pod \"dd598a7a-34ba-4392-908b-c18d89648bb5\" (UID: \"dd598a7a-34ba-4392-908b-c18d89648bb5\") "
Jan 29 16:26:38 crc kubenswrapper[4714]: I0129 16:26:38.499511 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd598a7a-34ba-4392-908b-c18d89648bb5-bundle" (OuterVolumeSpecName: "bundle") pod "dd598a7a-34ba-4392-908b-c18d89648bb5" (UID: "dd598a7a-34ba-4392-908b-c18d89648bb5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:26:38 crc kubenswrapper[4714]: I0129 16:26:38.503779 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd598a7a-34ba-4392-908b-c18d89648bb5-kube-api-access-wlxp5" (OuterVolumeSpecName: "kube-api-access-wlxp5") pod "dd598a7a-34ba-4392-908b-c18d89648bb5" (UID: "dd598a7a-34ba-4392-908b-c18d89648bb5"). InnerVolumeSpecName "kube-api-access-wlxp5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:26:38 crc kubenswrapper[4714]: I0129 16:26:38.509094 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd598a7a-34ba-4392-908b-c18d89648bb5-util" (OuterVolumeSpecName: "util") pod "dd598a7a-34ba-4392-908b-c18d89648bb5" (UID: "dd598a7a-34ba-4392-908b-c18d89648bb5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:26:38 crc kubenswrapper[4714]: I0129 16:26:38.600453 4714 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd598a7a-34ba-4392-908b-c18d89648bb5-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:26:38 crc kubenswrapper[4714]: I0129 16:26:38.600501 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlxp5\" (UniqueName: \"kubernetes.io/projected/dd598a7a-34ba-4392-908b-c18d89648bb5-kube-api-access-wlxp5\") on node \"crc\" DevicePath \"\""
Jan 29 16:26:38 crc kubenswrapper[4714]: I0129 16:26:38.600513 4714 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd598a7a-34ba-4392-908b-c18d89648bb5-util\") on node \"crc\" DevicePath \"\""
Jan 29 16:26:39 crc kubenswrapper[4714]: I0129 16:26:39.135308 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg" event={"ID":"dd598a7a-34ba-4392-908b-c18d89648bb5","Type":"ContainerDied","Data":"2c5ce017fddcc26d58ff7fc5c31e1e866cf3cb41da47bd4a24dc52b2e88e55e7"}
Jan 29 16:26:39 crc kubenswrapper[4714]: I0129 16:26:39.135393 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c5ce017fddcc26d58ff7fc5c31e1e866cf3cb41da47bd4a24dc52b2e88e55e7"
Jan 29 16:26:39 crc kubenswrapper[4714]: I0129 16:26:39.135522 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg"
Jan 29 16:26:49 crc kubenswrapper[4714]: E0129 16:26:49.188413 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-w58tg" podUID="db1a5ea4-712c-4ec8-aeae-c44998fa51e1"
Jan 29 16:26:49 crc kubenswrapper[4714]: E0129 16:26:49.188413 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd"
Jan 29 16:26:49 crc kubenswrapper[4714]: I0129 16:26:49.349326 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28"]
Jan 29 16:26:49 crc kubenswrapper[4714]: E0129 16:26:49.349558 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd598a7a-34ba-4392-908b-c18d89648bb5" containerName="util"
Jan 29 16:26:49 crc kubenswrapper[4714]: I0129 16:26:49.349570 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd598a7a-34ba-4392-908b-c18d89648bb5" containerName="util"
Jan 29 16:26:49 crc kubenswrapper[4714]: E0129 16:26:49.349584 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd598a7a-34ba-4392-908b-c18d89648bb5" containerName="pull"
Jan 29 16:26:49 crc kubenswrapper[4714]: I0129 16:26:49.349589 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd598a7a-34ba-4392-908b-c18d89648bb5" containerName="pull"
Jan 29 16:26:49 crc kubenswrapper[4714]: E0129 16:26:49.349601 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd598a7a-34ba-4392-908b-c18d89648bb5" containerName="extract"
Jan 29 16:26:49 crc kubenswrapper[4714]: I0129 16:26:49.349607 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd598a7a-34ba-4392-908b-c18d89648bb5" containerName="extract"
Jan 29 16:26:49 crc kubenswrapper[4714]: I0129 16:26:49.349703 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd598a7a-34ba-4392-908b-c18d89648bb5" containerName="extract"
Jan 29 16:26:49 crc kubenswrapper[4714]: I0129 16:26:49.350105 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28"
Jan 29 16:26:49 crc kubenswrapper[4714]: I0129 16:26:49.351826 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-m5648"
Jan 29 16:26:49 crc kubenswrapper[4714]: I0129 16:26:49.351857 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-service-cert"
Jan 29 16:26:49 crc kubenswrapper[4714]: I0129 16:26:49.364061 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28"]
Jan 29 16:26:49 crc kubenswrapper[4714]: I0129 16:26:49.499104 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5d602ee5-4171-4dc7-9852-88c6019696e1-apiservice-cert\") pod \"keystone-operator-controller-manager-5b97656f4c-wwx28\" (UID: \"5d602ee5-4171-4dc7-9852-88c6019696e1\") " pod="openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28"
Jan 29 16:26:49 crc kubenswrapper[4714]: I0129 16:26:49.499177 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5d602ee5-4171-4dc7-9852-88c6019696e1-webhook-cert\") pod \"keystone-operator-controller-manager-5b97656f4c-wwx28\" (UID: \"5d602ee5-4171-4dc7-9852-88c6019696e1\") " pod="openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28"
Jan 29 16:26:49 crc kubenswrapper[4714]: I0129 16:26:49.499442 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7sdp\" (UniqueName: \"kubernetes.io/projected/5d602ee5-4171-4dc7-9852-88c6019696e1-kube-api-access-p7sdp\") pod \"keystone-operator-controller-manager-5b97656f4c-wwx28\" (UID: \"5d602ee5-4171-4dc7-9852-88c6019696e1\") " pod="openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28"
Jan 29 16:26:49 crc kubenswrapper[4714]: I0129 16:26:49.600584 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5d602ee5-4171-4dc7-9852-88c6019696e1-webhook-cert\") pod \"keystone-operator-controller-manager-5b97656f4c-wwx28\" (UID: \"5d602ee5-4171-4dc7-9852-88c6019696e1\") " pod="openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28"
Jan 29 16:26:49 crc kubenswrapper[4714]: I0129 16:26:49.600674 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7sdp\" (UniqueName: \"kubernetes.io/projected/5d602ee5-4171-4dc7-9852-88c6019696e1-kube-api-access-p7sdp\") pod \"keystone-operator-controller-manager-5b97656f4c-wwx28\" (UID: \"5d602ee5-4171-4dc7-9852-88c6019696e1\") " pod="openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28"
Jan 29 16:26:49 crc kubenswrapper[4714]: I0129 16:26:49.600725 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5d602ee5-4171-4dc7-9852-88c6019696e1-apiservice-cert\") pod \"keystone-operator-controller-manager-5b97656f4c-wwx28\" (UID: \"5d602ee5-4171-4dc7-9852-88c6019696e1\") " pod="openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28"
Jan 29 16:26:49 crc kubenswrapper[4714]: I0129 16:26:49.607761 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5d602ee5-4171-4dc7-9852-88c6019696e1-webhook-cert\") pod \"keystone-operator-controller-manager-5b97656f4c-wwx28\" (UID: \"5d602ee5-4171-4dc7-9852-88c6019696e1\") " pod="openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28"
Jan 29 16:26:49 crc kubenswrapper[4714]: I0129 16:26:49.609762 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5d602ee5-4171-4dc7-9852-88c6019696e1-apiservice-cert\") pod \"keystone-operator-controller-manager-5b97656f4c-wwx28\" (UID: \"5d602ee5-4171-4dc7-9852-88c6019696e1\") " pod="openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28"
Jan 29 16:26:49 crc kubenswrapper[4714]: I0129 16:26:49.621981 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7sdp\" (UniqueName: \"kubernetes.io/projected/5d602ee5-4171-4dc7-9852-88c6019696e1-kube-api-access-p7sdp\") pod \"keystone-operator-controller-manager-5b97656f4c-wwx28\" (UID: \"5d602ee5-4171-4dc7-9852-88c6019696e1\") " pod="openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28"
Jan 29 16:26:49 crc kubenswrapper[4714]: I0129 16:26:49.666593 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28"
Jan 29 16:26:49 crc kubenswrapper[4714]: I0129 16:26:49.918003 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28"]
Jan 29 16:26:49 crc kubenswrapper[4714]: W0129 16:26:49.924352 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d602ee5_4171_4dc7_9852_88c6019696e1.slice/crio-68e4148d363b3ff81741fb86667cb2d686f613c7db6f1f53e639c87301659f49 WatchSource:0}: Error finding container 68e4148d363b3ff81741fb86667cb2d686f613c7db6f1f53e639c87301659f49: Status 404 returned error can't find the container with id 68e4148d363b3ff81741fb86667cb2d686f613c7db6f1f53e639c87301659f49
Jan 29 16:26:50 crc kubenswrapper[4714]: I0129 16:26:50.219897 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28" event={"ID":"5d602ee5-4171-4dc7-9852-88c6019696e1","Type":"ContainerStarted","Data":"68e4148d363b3ff81741fb86667cb2d686f613c7db6f1f53e639c87301659f49"}
Jan 29 16:26:55 crc kubenswrapper[4714]: I0129 16:26:55.262686 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28" event={"ID":"5d602ee5-4171-4dc7-9852-88c6019696e1","Type":"ContainerStarted","Data":"0389615fc2e1a33814fa46fa1999a6f01822d1c7cd6bb84e89e87e7102d3acc5"}
Jan 29 16:26:55 crc kubenswrapper[4714]: I0129 16:26:55.263207 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28"
Jan 29 16:26:55 crc kubenswrapper[4714]: I0129 16:26:55.286529 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28" podStartSLOduration=1.760099205 podStartE2EDuration="6.28650597s" podCreationTimestamp="2026-01-29 16:26:49 +0000 UTC" firstStartedPulling="2026-01-29 16:26:49.927044747 +0000 UTC m=+1016.447545867" lastFinishedPulling="2026-01-29 16:26:54.453451522 +0000 UTC m=+1020.973952632" observedRunningTime="2026-01-29 16:26:55.27901753 +0000 UTC m=+1021.799518670" watchObservedRunningTime="2026-01-29 16:26:55.28650597 +0000 UTC m=+1021.807007090"
Jan 29 16:26:57 crc kubenswrapper[4714]: I0129 16:26:57.844550 4714 patch_prober.go:28] interesting pod/machine-config-daemon-ppngk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:26:57 crc kubenswrapper[4714]: I0129 16:26:57.845577 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:26:58 crc kubenswrapper[4714]: I0129 16:26:58.296571 4714 generic.go:334] "Generic (PLEG): container finished" podID="55e23ac1-a89b-4689-a17d-bee875f7783e" containerID="eb95fbc4965d0aeffea6fce3a32742ee3a1bc5446fd33b5341ed6d4be042d493" exitCode=0
Jan 29 16:26:58 crc kubenswrapper[4714]: I0129 16:26:58.296638 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/rabbitmq-server-0" event={"ID":"55e23ac1-a89b-4689-a17d-bee875f7783e","Type":"ContainerDied","Data":"eb95fbc4965d0aeffea6fce3a32742ee3a1bc5446fd33b5341ed6d4be042d493"}
Jan 29 16:26:59 crc kubenswrapper[4714]: I0129 16:26:59.306481 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/rabbitmq-server-0" event={"ID":"55e23ac1-a89b-4689-a17d-bee875f7783e","Type":"ContainerStarted","Data":"ee125ef7ce148229ad82b22d2a7578dd54b32a1f3c67998b7176056878a4676a"}
Jan 29 16:26:59 crc kubenswrapper[4714]: I0129 16:26:59.306839 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/rabbitmq-server-0"
Jan 29 16:26:59 crc kubenswrapper[4714]: I0129 16:26:59.328794 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/rabbitmq-server-0" podStartSLOduration=36.688712317 podStartE2EDuration="45.328770933s" podCreationTimestamp="2026-01-29 16:26:14 +0000 UTC" firstStartedPulling="2026-01-29 16:26:16.523093735 +0000 UTC m=+983.043594855" lastFinishedPulling="2026-01-29 16:26:25.163152341 +0000 UTC m=+991.683653471" observedRunningTime="2026-01-29 16:26:59.327693223 +0000 UTC m=+1025.848194343" watchObservedRunningTime="2026-01-29 16:26:59.328770933 +0000 UTC m=+1025.849272073"
Jan 29 16:26:59 crc kubenswrapper[4714]: I0129 16:26:59.672017 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28"
Jan 29 16:27:01 crc kubenswrapper[4714]: E0129 16:27:01.187626 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd"
Jan 29 16:27:02 crc kubenswrapper[4714]: I0129 16:27:02.340753 4714 generic.go:334] "Generic (PLEG): container finished" podID="db1a5ea4-712c-4ec8-aeae-c44998fa51e1" containerID="4b864e9c0bb2f4364a8efa096fde9bcf1de20bf04e06b58bcc7485e4462f93c5" exitCode=0
Jan 29 16:27:02 crc kubenswrapper[4714]: I0129 16:27:02.340830 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w58tg" event={"ID":"db1a5ea4-712c-4ec8-aeae-c44998fa51e1","Type":"ContainerDied","Data":"4b864e9c0bb2f4364a8efa096fde9bcf1de20bf04e06b58bcc7485e4462f93c5"}
Jan 29 16:27:03 crc kubenswrapper[4714]: I0129 16:27:03.348011 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w58tg" event={"ID":"db1a5ea4-712c-4ec8-aeae-c44998fa51e1","Type":"ContainerStarted","Data":"87939f973c741f22ef30a3e11e1a7e3cde2e6dfae61483c34dee22312eab0f29"}
Jan 29 16:27:03 crc kubenswrapper[4714]: I0129 16:27:03.362537 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w58tg" podStartSLOduration=2.63832229 podStartE2EDuration="45.362516708s" podCreationTimestamp="2026-01-29 16:26:18 +0000 UTC" firstStartedPulling="2026-01-29 16:26:20.012068362 +0000 UTC m=+986.532569482" lastFinishedPulling="2026-01-29 16:27:02.73626278 +0000 UTC m=+1029.256763900" observedRunningTime="2026-01-29 16:27:03.361268913 +0000 UTC m=+1029.881770043" watchObservedRunningTime="2026-01-29 16:27:03.362516708 +0000 UTC m=+1029.883017828"
Jan 29 16:27:09 crc kubenswrapper[4714]: I0129 16:27:09.134202 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w58tg"
Jan 29 16:27:09 crc kubenswrapper[4714]: I0129 16:27:09.135374 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w58tg"
Jan 29 16:27:09 crc kubenswrapper[4714]: I0129 16:27:09.191066 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-index-d7f6m"]
Jan 29 16:27:09 crc kubenswrapper[4714]: I0129 16:27:09.205266 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-index-d7f6m"
Jan 29 16:27:09 crc kubenswrapper[4714]: I0129 16:27:09.210722 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-index-dockercfg-tgxd8"
Jan 29 16:27:09 crc kubenswrapper[4714]: I0129 16:27:09.216143 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-index-d7f6m"]
Jan 29 16:27:09 crc kubenswrapper[4714]: I0129 16:27:09.222714 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w58tg"
Jan 29 16:27:09 crc kubenswrapper[4714]: I0129 16:27:09.289031 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6lgj\" (UniqueName: \"kubernetes.io/projected/87506df3-b56a-4598-8309-e865dc93cf53-kube-api-access-m6lgj\") pod \"cinder-operator-index-d7f6m\" (UID: \"87506df3-b56a-4598-8309-e865dc93cf53\") " pod="openstack-operators/cinder-operator-index-d7f6m"
Jan 29 16:27:09 crc kubenswrapper[4714]: I0129 16:27:09.392426 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6lgj\" (UniqueName: \"kubernetes.io/projected/87506df3-b56a-4598-8309-e865dc93cf53-kube-api-access-m6lgj\") pod \"cinder-operator-index-d7f6m\" (UID: \"87506df3-b56a-4598-8309-e865dc93cf53\") " pod="openstack-operators/cinder-operator-index-d7f6m"
Jan 29 16:27:09 crc kubenswrapper[4714]: I0129 16:27:09.419738 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6lgj\" (UniqueName: \"kubernetes.io/projected/87506df3-b56a-4598-8309-e865dc93cf53-kube-api-access-m6lgj\") pod \"cinder-operator-index-d7f6m\" (UID: \"87506df3-b56a-4598-8309-e865dc93cf53\") " pod="openstack-operators/cinder-operator-index-d7f6m"
Jan 29 16:27:09 crc kubenswrapper[4714]: I0129 16:27:09.448248 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w58tg"
Jan 29 16:27:09 crc kubenswrapper[4714]: I0129 16:27:09.536631 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-index-d7f6m"
Jan 29 16:27:09 crc kubenswrapper[4714]: I0129 16:27:09.952268 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-index-d7f6m"]
Jan 29 16:27:10 crc kubenswrapper[4714]: I0129 16:27:10.396030 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-index-d7f6m" event={"ID":"87506df3-b56a-4598-8309-e865dc93cf53","Type":"ContainerStarted","Data":"6f87b469ba7b044e8c1285048b5bce1bfd385e6eeae34b8f824127e313741cf2"}
Jan 29 16:27:12 crc kubenswrapper[4714]: E0129 16:27:12.194371 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd"
Jan 29 16:27:13 crc kubenswrapper[4714]: I0129 16:27:13.427156 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-index-d7f6m" event={"ID":"87506df3-b56a-4598-8309-e865dc93cf53","Type":"ContainerStarted","Data":"ddec768853957c5a9f807fefab5f4e26d2050966765bc9353326a1334ad4a549"}
Jan 29 16:27:13 crc kubenswrapper[4714]: I0129 16:27:13.446227 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-index-d7f6m" podStartSLOduration=1.628648934 podStartE2EDuration="4.446193751s" podCreationTimestamp="2026-01-29 16:27:09 +0000 UTC" firstStartedPulling="2026-01-29 16:27:09.967845933 +0000 UTC m=+1036.488347053" lastFinishedPulling="2026-01-29 16:27:12.78539075 +0000 UTC m=+1039.305891870" observedRunningTime="2026-01-29 16:27:13.442675395 +0000 UTC m=+1039.963176535" watchObservedRunningTime="2026-01-29 16:27:13.446193751 +0000 UTC m=+1039.966694871"
Jan 29 16:27:13 crc kubenswrapper[4714]: I0129 16:27:13.744055 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w58tg"]
Jan 29 16:27:13 crc kubenswrapper[4714]: I0129 16:27:13.744304 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w58tg" podUID="db1a5ea4-712c-4ec8-aeae-c44998fa51e1" containerName="registry-server" containerID="cri-o://87939f973c741f22ef30a3e11e1a7e3cde2e6dfae61483c34dee22312eab0f29" gracePeriod=2
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.196191 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w58tg"
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.276423 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db1a5ea4-712c-4ec8-aeae-c44998fa51e1-catalog-content\") pod \"db1a5ea4-712c-4ec8-aeae-c44998fa51e1\" (UID: \"db1a5ea4-712c-4ec8-aeae-c44998fa51e1\") "
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.276503 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn67x\" (UniqueName: \"kubernetes.io/projected/db1a5ea4-712c-4ec8-aeae-c44998fa51e1-kube-api-access-rn67x\") pod \"db1a5ea4-712c-4ec8-aeae-c44998fa51e1\" (UID: \"db1a5ea4-712c-4ec8-aeae-c44998fa51e1\") "
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.276562 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db1a5ea4-712c-4ec8-aeae-c44998fa51e1-utilities\") pod \"db1a5ea4-712c-4ec8-aeae-c44998fa51e1\" (UID: \"db1a5ea4-712c-4ec8-aeae-c44998fa51e1\") "
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.277506 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db1a5ea4-712c-4ec8-aeae-c44998fa51e1-utilities" (OuterVolumeSpecName: "utilities") pod "db1a5ea4-712c-4ec8-aeae-c44998fa51e1" (UID: "db1a5ea4-712c-4ec8-aeae-c44998fa51e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.297097 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db1a5ea4-712c-4ec8-aeae-c44998fa51e1-kube-api-access-rn67x" (OuterVolumeSpecName: "kube-api-access-rn67x") pod "db1a5ea4-712c-4ec8-aeae-c44998fa51e1" (UID: "db1a5ea4-712c-4ec8-aeae-c44998fa51e1"). InnerVolumeSpecName "kube-api-access-rn67x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.328777 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db1a5ea4-712c-4ec8-aeae-c44998fa51e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db1a5ea4-712c-4ec8-aeae-c44998fa51e1" (UID: "db1a5ea4-712c-4ec8-aeae-c44998fa51e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.378137 4714 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db1a5ea4-712c-4ec8-aeae-c44998fa51e1-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.378176 4714 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db1a5ea4-712c-4ec8-aeae-c44998fa51e1-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.378193 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn67x\" (UniqueName: \"kubernetes.io/projected/db1a5ea4-712c-4ec8-aeae-c44998fa51e1-kube-api-access-rn67x\") on node \"crc\" DevicePath \"\""
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.435313 4714 generic.go:334] "Generic (PLEG): container finished" podID="db1a5ea4-712c-4ec8-aeae-c44998fa51e1" containerID="87939f973c741f22ef30a3e11e1a7e3cde2e6dfae61483c34dee22312eab0f29" exitCode=0
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.435381 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w58tg" event={"ID":"db1a5ea4-712c-4ec8-aeae-c44998fa51e1","Type":"ContainerDied","Data":"87939f973c741f22ef30a3e11e1a7e3cde2e6dfae61483c34dee22312eab0f29"}
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.435434 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w58tg" event={"ID":"db1a5ea4-712c-4ec8-aeae-c44998fa51e1","Type":"ContainerDied","Data":"fa3e0fa56a351fb40b73e185e0120625630ac7fd4a18cf99fdfb008b5ae7fadf"}
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.435435 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w58tg"
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.435454 4714 scope.go:117] "RemoveContainer" containerID="87939f973c741f22ef30a3e11e1a7e3cde2e6dfae61483c34dee22312eab0f29"
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.453242 4714 scope.go:117] "RemoveContainer" containerID="4b864e9c0bb2f4364a8efa096fde9bcf1de20bf04e06b58bcc7485e4462f93c5"
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.461808 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w58tg"]
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.468155 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w58tg"]
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.529752 4714 scope.go:117] "RemoveContainer" containerID="977047a32f415a72ab936afe2d1774146105933da01176d2aabeb71559b4afad"
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.545549 4714 scope.go:117] "RemoveContainer" containerID="87939f973c741f22ef30a3e11e1a7e3cde2e6dfae61483c34dee22312eab0f29"
Jan 29 16:27:14 crc kubenswrapper[4714]: E0129 16:27:14.546033 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87939f973c741f22ef30a3e11e1a7e3cde2e6dfae61483c34dee22312eab0f29\": container with ID starting with 87939f973c741f22ef30a3e11e1a7e3cde2e6dfae61483c34dee22312eab0f29 not found: ID does not exist" containerID="87939f973c741f22ef30a3e11e1a7e3cde2e6dfae61483c34dee22312eab0f29"
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.546084 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87939f973c741f22ef30a3e11e1a7e3cde2e6dfae61483c34dee22312eab0f29"} err="failed to get container status \"87939f973c741f22ef30a3e11e1a7e3cde2e6dfae61483c34dee22312eab0f29\": rpc error: code = NotFound desc = could not find container \"87939f973c741f22ef30a3e11e1a7e3cde2e6dfae61483c34dee22312eab0f29\": container with ID starting with 87939f973c741f22ef30a3e11e1a7e3cde2e6dfae61483c34dee22312eab0f29 not found: ID does not exist"
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.546110 4714 scope.go:117] "RemoveContainer" containerID="4b864e9c0bb2f4364a8efa096fde9bcf1de20bf04e06b58bcc7485e4462f93c5"
Jan 29 16:27:14 crc kubenswrapper[4714]: E0129 16:27:14.546664 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b864e9c0bb2f4364a8efa096fde9bcf1de20bf04e06b58bcc7485e4462f93c5\": container with ID starting with 4b864e9c0bb2f4364a8efa096fde9bcf1de20bf04e06b58bcc7485e4462f93c5 not found: ID does not exist" containerID="4b864e9c0bb2f4364a8efa096fde9bcf1de20bf04e06b58bcc7485e4462f93c5"
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.546694 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b864e9c0bb2f4364a8efa096fde9bcf1de20bf04e06b58bcc7485e4462f93c5"} err="failed to get container status \"4b864e9c0bb2f4364a8efa096fde9bcf1de20bf04e06b58bcc7485e4462f93c5\": rpc error: code = NotFound desc = could not find container \"4b864e9c0bb2f4364a8efa096fde9bcf1de20bf04e06b58bcc7485e4462f93c5\": container with ID starting with 4b864e9c0bb2f4364a8efa096fde9bcf1de20bf04e06b58bcc7485e4462f93c5 not found: ID does not exist"
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.546717 4714 scope.go:117] "RemoveContainer" containerID="977047a32f415a72ab936afe2d1774146105933da01176d2aabeb71559b4afad"
Jan 29 16:27:14 crc kubenswrapper[4714]: E0129 16:27:14.547062 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"977047a32f415a72ab936afe2d1774146105933da01176d2aabeb71559b4afad\": container with ID starting with 977047a32f415a72ab936afe2d1774146105933da01176d2aabeb71559b4afad not found: ID does not exist" containerID="977047a32f415a72ab936afe2d1774146105933da01176d2aabeb71559b4afad"
Jan 29 16:27:14 crc kubenswrapper[4714]: I0129 16:27:14.547080 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"977047a32f415a72ab936afe2d1774146105933da01176d2aabeb71559b4afad"} err="failed to get container status \"977047a32f415a72ab936afe2d1774146105933da01176d2aabeb71559b4afad\": rpc error: code = NotFound desc = could not find container \"977047a32f415a72ab936afe2d1774146105933da01176d2aabeb71559b4afad\": container with ID starting with 977047a32f415a72ab936afe2d1774146105933da01176d2aabeb71559b4afad not found: ID does not exist"
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.362908 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/keystone-db-create-4hhfq"]
Jan 29 16:27:15 crc kubenswrapper[4714]: E0129 16:27:15.363427 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1a5ea4-712c-4ec8-aeae-c44998fa51e1" containerName="registry-server"
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.363504 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1a5ea4-712c-4ec8-aeae-c44998fa51e1" containerName="registry-server"
Jan 29 16:27:15 crc kubenswrapper[4714]: E0129 16:27:15.363587 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1a5ea4-712c-4ec8-aeae-c44998fa51e1" containerName="extract-content"
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.363643 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1a5ea4-712c-4ec8-aeae-c44998fa51e1" containerName="extract-content"
Jan 29 16:27:15 crc kubenswrapper[4714]: E0129 16:27:15.363698 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1a5ea4-712c-4ec8-aeae-c44998fa51e1" containerName="extract-utilities"
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.363752 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1a5ea4-712c-4ec8-aeae-c44998fa51e1" containerName="extract-utilities"
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.363913 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="db1a5ea4-712c-4ec8-aeae-c44998fa51e1" containerName="registry-server"
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.364431 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db-create-4hhfq"
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.376741 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-db-create-4hhfq"]
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.383084 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/keystone-52f2-account-create-update-5x6mg"]
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.383915 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-52f2-account-create-update-5x6mg"
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.387045 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-db-secret"
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.407142 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-52f2-account-create-update-5x6mg"]
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.494003 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6c31118-0f0d-46fb-a9fc-d135e234fe41-operator-scripts\") pod \"keystone-52f2-account-create-update-5x6mg\" (UID: \"f6c31118-0f0d-46fb-a9fc-d135e234fe41\") " pod="cinder-kuttl-tests/keystone-52f2-account-create-update-5x6mg"
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.494049 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znwpf\" (UniqueName: \"kubernetes.io/projected/f6c31118-0f0d-46fb-a9fc-d135e234fe41-kube-api-access-znwpf\") pod \"keystone-52f2-account-create-update-5x6mg\" (UID: \"f6c31118-0f0d-46fb-a9fc-d135e234fe41\") " pod="cinder-kuttl-tests/keystone-52f2-account-create-update-5x6mg"
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.494081 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bf4c895-a323-452d-8329-cb69a752341c-operator-scripts\") pod \"keystone-db-create-4hhfq\" (UID: \"4bf4c895-a323-452d-8329-cb69a752341c\") " pod="cinder-kuttl-tests/keystone-db-create-4hhfq"
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.494113 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq285\" (UniqueName: \"kubernetes.io/projected/4bf4c895-a323-452d-8329-cb69a752341c-kube-api-access-sq285\") pod \"keystone-db-create-4hhfq\" (UID: \"4bf4c895-a323-452d-8329-cb69a752341c\") " pod="cinder-kuttl-tests/keystone-db-create-4hhfq"
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.595088 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6c31118-0f0d-46fb-a9fc-d135e234fe41-operator-scripts\") pod \"keystone-52f2-account-create-update-5x6mg\" (UID: \"f6c31118-0f0d-46fb-a9fc-d135e234fe41\") " pod="cinder-kuttl-tests/keystone-52f2-account-create-update-5x6mg"
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.595143 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znwpf\" (UniqueName: \"kubernetes.io/projected/f6c31118-0f0d-46fb-a9fc-d135e234fe41-kube-api-access-znwpf\") pod \"keystone-52f2-account-create-update-5x6mg\" (UID: \"f6c31118-0f0d-46fb-a9fc-d135e234fe41\") " pod="cinder-kuttl-tests/keystone-52f2-account-create-update-5x6mg"
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.595180 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bf4c895-a323-452d-8329-cb69a752341c-operator-scripts\") pod \"keystone-db-create-4hhfq\" (UID: \"4bf4c895-a323-452d-8329-cb69a752341c\") " pod="cinder-kuttl-tests/keystone-db-create-4hhfq"
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.595204 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq285\" (UniqueName: \"kubernetes.io/projected/4bf4c895-a323-452d-8329-cb69a752341c-kube-api-access-sq285\") pod \"keystone-db-create-4hhfq\" (UID: \"4bf4c895-a323-452d-8329-cb69a752341c\") " pod="cinder-kuttl-tests/keystone-db-create-4hhfq"
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.596018 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6c31118-0f0d-46fb-a9fc-d135e234fe41-operator-scripts\") pod \"keystone-52f2-account-create-update-5x6mg\" (UID: \"f6c31118-0f0d-46fb-a9fc-d135e234fe41\") " pod="cinder-kuttl-tests/keystone-52f2-account-create-update-5x6mg"
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.596069 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bf4c895-a323-452d-8329-cb69a752341c-operator-scripts\") pod \"keystone-db-create-4hhfq\" (UID: \"4bf4c895-a323-452d-8329-cb69a752341c\") " pod="cinder-kuttl-tests/keystone-db-create-4hhfq"
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.613648 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znwpf\" (UniqueName: \"kubernetes.io/projected/f6c31118-0f0d-46fb-a9fc-d135e234fe41-kube-api-access-znwpf\") pod \"keystone-52f2-account-create-update-5x6mg\" (UID: \"f6c31118-0f0d-46fb-a9fc-d135e234fe41\") " pod="cinder-kuttl-tests/keystone-52f2-account-create-update-5x6mg"
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.615050 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq285\" (UniqueName: \"kubernetes.io/projected/4bf4c895-a323-452d-8329-cb69a752341c-kube-api-access-sq285\") pod \"keystone-db-create-4hhfq\" (UID: \"4bf4c895-a323-452d-8329-cb69a752341c\") " pod="cinder-kuttl-tests/keystone-db-create-4hhfq"
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.677284 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db-create-4hhfq"
Jan 29 16:27:15 crc kubenswrapper[4714]: I0129 16:27:15.696697 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-52f2-account-create-update-5x6mg" Jan 29 16:27:16 crc kubenswrapper[4714]: I0129 16:27:16.093688 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:27:16 crc kubenswrapper[4714]: I0129 16:27:16.162759 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-db-create-4hhfq"] Jan 29 16:27:16 crc kubenswrapper[4714]: I0129 16:27:16.194570 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db1a5ea4-712c-4ec8-aeae-c44998fa51e1" path="/var/lib/kubelet/pods/db1a5ea4-712c-4ec8-aeae-c44998fa51e1/volumes" Jan 29 16:27:16 crc kubenswrapper[4714]: I0129 16:27:16.233720 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-52f2-account-create-update-5x6mg"] Jan 29 16:27:16 crc kubenswrapper[4714]: I0129 16:27:16.451275 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-create-4hhfq" event={"ID":"4bf4c895-a323-452d-8329-cb69a752341c","Type":"ContainerStarted","Data":"958fc7292af56ecfa7d5c5a7066233d1295c2b0c82a6e8e4646901914aabf005"} Jan 29 16:27:16 crc kubenswrapper[4714]: I0129 16:27:16.451364 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-create-4hhfq" event={"ID":"4bf4c895-a323-452d-8329-cb69a752341c","Type":"ContainerStarted","Data":"143876cbf9d3526f84e82f093d097878163e840a2b90cc081656f409d4bac93e"} Jan 29 16:27:16 crc kubenswrapper[4714]: I0129 16:27:16.454280 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-52f2-account-create-update-5x6mg" event={"ID":"f6c31118-0f0d-46fb-a9fc-d135e234fe41","Type":"ContainerStarted","Data":"57df90298e52fbd874f183f4f349569b890f49a89aba1583825eacf13909d613"} Jan 29 16:27:16 crc kubenswrapper[4714]: I0129 16:27:16.454319 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-52f2-account-create-update-5x6mg" event={"ID":"f6c31118-0f0d-46fb-a9fc-d135e234fe41","Type":"ContainerStarted","Data":"978ec753876dcd68b52989b0c105fe5d3cde2a954d0e4d72d8a1ef936aa61c3d"} Jan 29 16:27:16 crc kubenswrapper[4714]: I0129 16:27:16.502762 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/keystone-db-create-4hhfq" podStartSLOduration=1.502743765 podStartE2EDuration="1.502743765s" podCreationTimestamp="2026-01-29 16:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:27:16.479457255 +0000 UTC m=+1042.999958375" watchObservedRunningTime="2026-01-29 16:27:16.502743765 +0000 UTC m=+1043.023244885" Jan 29 16:27:17 crc kubenswrapper[4714]: I0129 16:27:17.464221 4714 generic.go:334] "Generic (PLEG): container finished" podID="f6c31118-0f0d-46fb-a9fc-d135e234fe41" containerID="57df90298e52fbd874f183f4f349569b890f49a89aba1583825eacf13909d613" exitCode=0 Jan 29 16:27:17 crc kubenswrapper[4714]: I0129 16:27:17.464284 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-52f2-account-create-update-5x6mg" event={"ID":"f6c31118-0f0d-46fb-a9fc-d135e234fe41","Type":"ContainerDied","Data":"57df90298e52fbd874f183f4f349569b890f49a89aba1583825eacf13909d613"} Jan 29 16:27:17 crc kubenswrapper[4714]: I0129 16:27:17.466776 4714 generic.go:334] "Generic (PLEG): container finished" podID="4bf4c895-a323-452d-8329-cb69a752341c" 
containerID="958fc7292af56ecfa7d5c5a7066233d1295c2b0c82a6e8e4646901914aabf005" exitCode=0 Jan 29 16:27:17 crc kubenswrapper[4714]: I0129 16:27:17.466813 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-create-4hhfq" event={"ID":"4bf4c895-a323-452d-8329-cb69a752341c","Type":"ContainerDied","Data":"958fc7292af56ecfa7d5c5a7066233d1295c2b0c82a6e8e4646901914aabf005"} Jan 29 16:27:18 crc kubenswrapper[4714]: I0129 16:27:18.831358 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db-create-4hhfq" Jan 29 16:27:18 crc kubenswrapper[4714]: I0129 16:27:18.837536 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-52f2-account-create-update-5x6mg" Jan 29 16:27:18 crc kubenswrapper[4714]: I0129 16:27:18.941176 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq285\" (UniqueName: \"kubernetes.io/projected/4bf4c895-a323-452d-8329-cb69a752341c-kube-api-access-sq285\") pod \"4bf4c895-a323-452d-8329-cb69a752341c\" (UID: \"4bf4c895-a323-452d-8329-cb69a752341c\") " Jan 29 16:27:18 crc kubenswrapper[4714]: I0129 16:27:18.941334 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bf4c895-a323-452d-8329-cb69a752341c-operator-scripts\") pod \"4bf4c895-a323-452d-8329-cb69a752341c\" (UID: \"4bf4c895-a323-452d-8329-cb69a752341c\") " Jan 29 16:27:18 crc kubenswrapper[4714]: I0129 16:27:18.941376 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6c31118-0f0d-46fb-a9fc-d135e234fe41-operator-scripts\") pod \"f6c31118-0f0d-46fb-a9fc-d135e234fe41\" (UID: \"f6c31118-0f0d-46fb-a9fc-d135e234fe41\") " Jan 29 16:27:18 crc kubenswrapper[4714]: I0129 16:27:18.941432 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znwpf\" (UniqueName: \"kubernetes.io/projected/f6c31118-0f0d-46fb-a9fc-d135e234fe41-kube-api-access-znwpf\") pod \"f6c31118-0f0d-46fb-a9fc-d135e234fe41\" (UID: \"f6c31118-0f0d-46fb-a9fc-d135e234fe41\") " Jan 29 16:27:18 crc kubenswrapper[4714]: I0129 16:27:18.942270 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf4c895-a323-452d-8329-cb69a752341c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4bf4c895-a323-452d-8329-cb69a752341c" (UID: "4bf4c895-a323-452d-8329-cb69a752341c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:27:18 crc kubenswrapper[4714]: I0129 16:27:18.942337 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6c31118-0f0d-46fb-a9fc-d135e234fe41-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f6c31118-0f0d-46fb-a9fc-d135e234fe41" (UID: "f6c31118-0f0d-46fb-a9fc-d135e234fe41"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:27:18 crc kubenswrapper[4714]: I0129 16:27:18.949076 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf4c895-a323-452d-8329-cb69a752341c-kube-api-access-sq285" (OuterVolumeSpecName: "kube-api-access-sq285") pod "4bf4c895-a323-452d-8329-cb69a752341c" (UID: "4bf4c895-a323-452d-8329-cb69a752341c"). 
InnerVolumeSpecName "kube-api-access-sq285". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:27:18 crc kubenswrapper[4714]: I0129 16:27:18.952144 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6c31118-0f0d-46fb-a9fc-d135e234fe41-kube-api-access-znwpf" (OuterVolumeSpecName: "kube-api-access-znwpf") pod "f6c31118-0f0d-46fb-a9fc-d135e234fe41" (UID: "f6c31118-0f0d-46fb-a9fc-d135e234fe41"). InnerVolumeSpecName "kube-api-access-znwpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:27:19 crc kubenswrapper[4714]: I0129 16:27:19.043749 4714 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6c31118-0f0d-46fb-a9fc-d135e234fe41-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:19 crc kubenswrapper[4714]: I0129 16:27:19.043800 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znwpf\" (UniqueName: \"kubernetes.io/projected/f6c31118-0f0d-46fb-a9fc-d135e234fe41-kube-api-access-znwpf\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:19 crc kubenswrapper[4714]: I0129 16:27:19.043823 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq285\" (UniqueName: \"kubernetes.io/projected/4bf4c895-a323-452d-8329-cb69a752341c-kube-api-access-sq285\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:19 crc kubenswrapper[4714]: I0129 16:27:19.043843 4714 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bf4c895-a323-452d-8329-cb69a752341c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:19 crc kubenswrapper[4714]: I0129 16:27:19.482409 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-create-4hhfq" event={"ID":"4bf4c895-a323-452d-8329-cb69a752341c","Type":"ContainerDied","Data":"143876cbf9d3526f84e82f093d097878163e840a2b90cc081656f409d4bac93e"} Jan 29 16:27:19 crc kubenswrapper[4714]: I0129 16:27:19.483114 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="143876cbf9d3526f84e82f093d097878163e840a2b90cc081656f409d4bac93e" Jan 29 16:27:19 crc kubenswrapper[4714]: I0129 16:27:19.482474 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db-create-4hhfq" Jan 29 16:27:19 crc kubenswrapper[4714]: I0129 16:27:19.484328 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-52f2-account-create-update-5x6mg" event={"ID":"f6c31118-0f0d-46fb-a9fc-d135e234fe41","Type":"ContainerDied","Data":"978ec753876dcd68b52989b0c105fe5d3cde2a954d0e4d72d8a1ef936aa61c3d"} Jan 29 16:27:19 crc kubenswrapper[4714]: I0129 16:27:19.484366 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="978ec753876dcd68b52989b0c105fe5d3cde2a954d0e4d72d8a1ef936aa61c3d" Jan 29 16:27:19 crc kubenswrapper[4714]: I0129 16:27:19.484426 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-52f2-account-create-update-5x6mg" Jan 29 16:27:19 crc kubenswrapper[4714]: I0129 16:27:19.537611 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/cinder-operator-index-d7f6m" Jan 29 16:27:19 crc kubenswrapper[4714]: I0129 16:27:19.537655 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-index-d7f6m" Jan 29 16:27:19 crc kubenswrapper[4714]: I0129 16:27:19.577366 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/cinder-operator-index-d7f6m" Jan 29 16:27:20 crc kubenswrapper[4714]: I0129 16:27:20.532894 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-index-d7f6m" Jan 29 16:27:21 crc kubenswrapper[4714]: I0129 16:27:21.023058 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/keystone-db-sync-6xxc6"] Jan 29 16:27:21 crc kubenswrapper[4714]: E0129 16:27:21.023684 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c31118-0f0d-46fb-a9fc-d135e234fe41" containerName="mariadb-account-create-update" Jan 29 16:27:21 crc kubenswrapper[4714]: I0129 16:27:21.023708 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c31118-0f0d-46fb-a9fc-d135e234fe41" containerName="mariadb-account-create-update" Jan 29 16:27:21 crc kubenswrapper[4714]: E0129 16:27:21.023728 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf4c895-a323-452d-8329-cb69a752341c" containerName="mariadb-database-create" Jan 29 16:27:21 crc kubenswrapper[4714]: I0129 16:27:21.023737 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf4c895-a323-452d-8329-cb69a752341c" containerName="mariadb-database-create" Jan 29 16:27:21 crc kubenswrapper[4714]: I0129 16:27:21.023890 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf4c895-a323-452d-8329-cb69a752341c" containerName="mariadb-database-create" Jan 29 16:27:21 crc kubenswrapper[4714]: I0129 16:27:21.023913 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6c31118-0f0d-46fb-a9fc-d135e234fe41" containerName="mariadb-account-create-update" Jan 29 16:27:21 crc kubenswrapper[4714]: I0129 16:27:21.024612 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-db-sync-6xxc6" Jan 29 16:27:21 crc kubenswrapper[4714]: I0129 16:27:21.026277 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-scripts" Jan 29 16:27:21 crc kubenswrapper[4714]: I0129 16:27:21.026535 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-config-data" Jan 29 16:27:21 crc kubenswrapper[4714]: I0129 16:27:21.027167 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone" Jan 29 16:27:21 crc kubenswrapper[4714]: I0129 16:27:21.032404 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-db-sync-6xxc6"] Jan 29 16:27:21 crc kubenswrapper[4714]: I0129 16:27:21.032691 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-keystone-dockercfg-6m228" Jan 29 16:27:21 crc kubenswrapper[4714]: I0129 16:27:21.071412 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e33ba3d-9561-441b-b835-fbdb6ce97d23-config-data\") pod \"keystone-db-sync-6xxc6\" (UID: \"6e33ba3d-9561-441b-b835-fbdb6ce97d23\") " pod="cinder-kuttl-tests/keystone-db-sync-6xxc6" Jan 29 16:27:21 crc kubenswrapper[4714]: I0129 16:27:21.071485 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7s7m\" (UniqueName: \"kubernetes.io/projected/6e33ba3d-9561-441b-b835-fbdb6ce97d23-kube-api-access-f7s7m\") pod \"keystone-db-sync-6xxc6\" (UID: \"6e33ba3d-9561-441b-b835-fbdb6ce97d23\") " pod="cinder-kuttl-tests/keystone-db-sync-6xxc6" Jan 29 16:27:21 crc kubenswrapper[4714]: I0129 16:27:21.173359 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7s7m\" (UniqueName: \"kubernetes.io/projected/6e33ba3d-9561-441b-b835-fbdb6ce97d23-kube-api-access-f7s7m\") pod \"keystone-db-sync-6xxc6\" (UID: \"6e33ba3d-9561-441b-b835-fbdb6ce97d23\") " pod="cinder-kuttl-tests/keystone-db-sync-6xxc6" Jan 29 16:27:21 crc kubenswrapper[4714]: I0129 16:27:21.173496 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e33ba3d-9561-441b-b835-fbdb6ce97d23-config-data\") pod \"keystone-db-sync-6xxc6\" (UID: \"6e33ba3d-9561-441b-b835-fbdb6ce97d23\") " pod="cinder-kuttl-tests/keystone-db-sync-6xxc6" Jan 29 16:27:21 crc kubenswrapper[4714]: I0129 16:27:21.179023 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e33ba3d-9561-441b-b835-fbdb6ce97d23-config-data\") pod \"keystone-db-sync-6xxc6\" (UID: \"6e33ba3d-9561-441b-b835-fbdb6ce97d23\") " pod="cinder-kuttl-tests/keystone-db-sync-6xxc6" Jan 29 16:27:21 crc kubenswrapper[4714]: I0129 16:27:21.203782 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7s7m\" (UniqueName: \"kubernetes.io/projected/6e33ba3d-9561-441b-b835-fbdb6ce97d23-kube-api-access-f7s7m\") pod \"keystone-db-sync-6xxc6\" (UID: \"6e33ba3d-9561-441b-b835-fbdb6ce97d23\") " pod="cinder-kuttl-tests/keystone-db-sync-6xxc6" Jan 29 16:27:21 crc kubenswrapper[4714]: I0129 16:27:21.341770 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-db-sync-6xxc6" Jan 29 16:27:21 crc kubenswrapper[4714]: I0129 16:27:21.831154 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-db-sync-6xxc6"] Jan 29 16:27:22 crc kubenswrapper[4714]: I0129 16:27:22.406165 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6"] Jan 29 16:27:22 crc kubenswrapper[4714]: I0129 16:27:22.408537 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6" Jan 29 16:27:22 crc kubenswrapper[4714]: I0129 16:27:22.412521 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hwqbr" Jan 29 16:27:22 crc kubenswrapper[4714]: I0129 16:27:22.419749 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6"] Jan 29 16:27:22 crc kubenswrapper[4714]: I0129 16:27:22.514469 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-sync-6xxc6" event={"ID":"6e33ba3d-9561-441b-b835-fbdb6ce97d23","Type":"ContainerStarted","Data":"cb8e5c568c512500695d36838718c30771a365e70a1f7bf91812aeb8914126a0"} Jan 29 16:27:22 crc kubenswrapper[4714]: I0129 16:27:22.514782 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4wwh\" (UniqueName: \"kubernetes.io/projected/0eecc358-9581-489e-97ae-f600d35a7613-kube-api-access-k4wwh\") pod \"60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6\" (UID: \"0eecc358-9581-489e-97ae-f600d35a7613\") " pod="openstack-operators/60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6" Jan 29 16:27:22 crc kubenswrapper[4714]: I0129 16:27:22.514832 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0eecc358-9581-489e-97ae-f600d35a7613-util\") pod \"60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6\" (UID: \"0eecc358-9581-489e-97ae-f600d35a7613\") " pod="openstack-operators/60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6" Jan 29 16:27:22 crc kubenswrapper[4714]: I0129 16:27:22.514901 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0eecc358-9581-489e-97ae-f600d35a7613-bundle\") pod \"60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6\" (UID: \"0eecc358-9581-489e-97ae-f600d35a7613\") " pod="openstack-operators/60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6" Jan 29 16:27:22 crc kubenswrapper[4714]: I0129 16:27:22.615705 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4wwh\" (UniqueName: \"kubernetes.io/projected/0eecc358-9581-489e-97ae-f600d35a7613-kube-api-access-k4wwh\") pod \"60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6\" (UID: \"0eecc358-9581-489e-97ae-f600d35a7613\") " pod="openstack-operators/60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6" Jan 29 16:27:22 crc kubenswrapper[4714]: I0129 16:27:22.615752 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0eecc358-9581-489e-97ae-f600d35a7613-util\") pod 
\"60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6\" (UID: \"0eecc358-9581-489e-97ae-f600d35a7613\") " pod="openstack-operators/60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6" Jan 29 16:27:22 crc kubenswrapper[4714]: I0129 16:27:22.615782 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0eecc358-9581-489e-97ae-f600d35a7613-bundle\") pod \"60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6\" (UID: \"0eecc358-9581-489e-97ae-f600d35a7613\") " pod="openstack-operators/60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6" Jan 29 16:27:22 crc kubenswrapper[4714]: I0129 16:27:22.616315 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0eecc358-9581-489e-97ae-f600d35a7613-bundle\") pod \"60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6\" (UID: \"0eecc358-9581-489e-97ae-f600d35a7613\") " pod="openstack-operators/60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6" Jan 29 16:27:22 crc kubenswrapper[4714]: I0129 16:27:22.616412 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0eecc358-9581-489e-97ae-f600d35a7613-util\") pod \"60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6\" (UID: \"0eecc358-9581-489e-97ae-f600d35a7613\") " pod="openstack-operators/60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6" Jan 29 16:27:22 crc kubenswrapper[4714]: I0129 16:27:22.649614 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4wwh\" (UniqueName: \"kubernetes.io/projected/0eecc358-9581-489e-97ae-f600d35a7613-kube-api-access-k4wwh\") pod \"60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6\" (UID: \"0eecc358-9581-489e-97ae-f600d35a7613\") " pod="openstack-operators/60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6" Jan 29 16:27:22 crc kubenswrapper[4714]: I0129 16:27:22.737213 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6" Jan 29 16:27:23 crc kubenswrapper[4714]: I0129 16:27:23.181180 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6"] Jan 29 16:27:23 crc kubenswrapper[4714]: I0129 16:27:23.522668 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6" event={"ID":"0eecc358-9581-489e-97ae-f600d35a7613","Type":"ContainerStarted","Data":"6f5e7b1c14376dcb0e62786390737b48cb04d2a05079d8160df0f9dedc56dfc8"} Jan 29 16:27:23 crc kubenswrapper[4714]: I0129 16:27:23.523217 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6" event={"ID":"0eecc358-9581-489e-97ae-f600d35a7613","Type":"ContainerStarted","Data":"ccf8807833090dda1a00ca4aa3c347ca65828dd70468671dc4659950b5c10bcf"} Jan 29 16:27:24 crc kubenswrapper[4714]: I0129 16:27:24.536217 4714 generic.go:334] "Generic (PLEG): container finished" podID="0eecc358-9581-489e-97ae-f600d35a7613" containerID="6f5e7b1c14376dcb0e62786390737b48cb04d2a05079d8160df0f9dedc56dfc8" exitCode=0 Jan 29 16:27:24 crc kubenswrapper[4714]: I0129 16:27:24.536261 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6" event={"ID":"0eecc358-9581-489e-97ae-f600d35a7613","Type":"ContainerDied","Data":"6f5e7b1c14376dcb0e62786390737b48cb04d2a05079d8160df0f9dedc56dfc8"} Jan 29 16:27:27 crc kubenswrapper[4714]: I0129 16:27:27.844279 4714 patch_prober.go:28] interesting pod/machine-config-daemon-ppngk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:27:27 crc kubenswrapper[4714]: I0129 16:27:27.845009 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:27:27 crc kubenswrapper[4714]: I0129 16:27:27.845152 4714 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" Jan 29 16:27:27 crc kubenswrapper[4714]: I0129 16:27:27.845793 4714 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"77045db0ac9dbee23fe648e58207222e15e50d5178fcc5cc7a606b4bbe2af7ec"} pod="openshift-machine-config-operator/machine-config-daemon-ppngk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:27:27 crc kubenswrapper[4714]: I0129 16:27:27.845842 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" containerID="cri-o://77045db0ac9dbee23fe648e58207222e15e50d5178fcc5cc7a606b4bbe2af7ec" gracePeriod=600 Jan 29 16:27:28 crc kubenswrapper[4714]: I0129 16:27:28.537973 4714 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Jan 29 16:27:28 crc kubenswrapper[4714]: I0129 16:27:28.573826 4714 generic.go:334] "Generic (PLEG): container finished" podID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerID="77045db0ac9dbee23fe648e58207222e15e50d5178fcc5cc7a606b4bbe2af7ec" exitCode=0 Jan 29 16:27:28 crc kubenswrapper[4714]: I0129 16:27:28.573892 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" event={"ID":"c8c765f3-89eb-4077-8829-03e86eb0c90c","Type":"ContainerDied","Data":"77045db0ac9dbee23fe648e58207222e15e50d5178fcc5cc7a606b4bbe2af7ec"} Jan 29 16:27:28 crc kubenswrapper[4714]: I0129 16:27:28.574022 4714 scope.go:117] "RemoveContainer" containerID="434181b332ad91829c9ca3b07c475cac7d3c8b013492e90ce07fd88776d24efa" Jan 29 16:27:28 crc kubenswrapper[4714]: E0129 16:27:28.677064 4714 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:27:28 crc kubenswrapper[4714]: E0129 16:27:28.677336 4714 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmm88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wdwq5_openshift-marketplace(8c12ad14-f878-42a1-a168-bad4026ec2dd): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:27:28 crc kubenswrapper[4714]: E0129 16:27:28.678693 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:27:29 crc kubenswrapper[4714]: I0129 16:27:29.581748 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" event={"ID":"c8c765f3-89eb-4077-8829-03e86eb0c90c","Type":"ContainerStarted","Data":"6d286411f160a5fdbd13efa6bbfae544ec01e44f19ea6b8ff05d4ab9953a5f4a"} Jan 29 16:27:29 crc kubenswrapper[4714]: I0129 16:27:29.583180 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-sync-6xxc6" event={"ID":"6e33ba3d-9561-441b-b835-fbdb6ce97d23","Type":"ContainerStarted","Data":"02c31eb5896b8dc80e78bcd830c0e2f150e33491e2a451310581ca2dc793d036"} Jan 29 16:27:29 crc kubenswrapper[4714]: I0129 16:27:29.585761 4714 generic.go:334] "Generic (PLEG): container finished" podID="0eecc358-9581-489e-97ae-f600d35a7613" containerID="8f755ca88cec23079c8fcc603a70054716d3829ccb1c6da9ff0f5feff88b5796" exitCode=0 Jan 29 16:27:29 crc kubenswrapper[4714]: I0129 16:27:29.585793 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6" event={"ID":"0eecc358-9581-489e-97ae-f600d35a7613","Type":"ContainerDied","Data":"8f755ca88cec23079c8fcc603a70054716d3829ccb1c6da9ff0f5feff88b5796"} Jan 29 16:27:29 crc kubenswrapper[4714]: I0129 16:27:29.611670 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/keystone-db-sync-6xxc6" podStartSLOduration=1.8190485330000001 podStartE2EDuration="8.611652333s" podCreationTimestamp="2026-01-29 16:27:21 +0000 UTC" firstStartedPulling="2026-01-29 16:27:21.843499267 +0000 UTC m=+1048.364000387" lastFinishedPulling="2026-01-29 16:27:28.636103047 +0000 UTC m=+1055.156604187" observedRunningTime="2026-01-29 16:27:29.611333416 +0000 UTC m=+1056.131834546" watchObservedRunningTime="2026-01-29 16:27:29.611652333 +0000 UTC m=+1056.132153463" Jan 29 16:27:30 crc kubenswrapper[4714]: I0129 16:27:30.594257 4714 generic.go:334] "Generic (PLEG): container finished" podID="0eecc358-9581-489e-97ae-f600d35a7613" containerID="a675f4f90dd437578a32683cafb8b1908c7b80f63189aec46569a29c2add56c0" exitCode=0 Jan 29 16:27:30 crc kubenswrapper[4714]: I0129 16:27:30.594300 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6" event={"ID":"0eecc358-9581-489e-97ae-f600d35a7613","Type":"ContainerDied","Data":"a675f4f90dd437578a32683cafb8b1908c7b80f63189aec46569a29c2add56c0"} Jan 29 16:27:31 crc kubenswrapper[4714]: I0129 16:27:31.869042 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6" Jan 29 16:27:31 crc kubenswrapper[4714]: I0129 16:27:31.991845 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0eecc358-9581-489e-97ae-f600d35a7613-bundle\") pod \"0eecc358-9581-489e-97ae-f600d35a7613\" (UID: \"0eecc358-9581-489e-97ae-f600d35a7613\") " Jan 29 16:27:31 crc kubenswrapper[4714]: I0129 16:27:31.993056 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eecc358-9581-489e-97ae-f600d35a7613-bundle" (OuterVolumeSpecName: "bundle") pod "0eecc358-9581-489e-97ae-f600d35a7613" (UID: "0eecc358-9581-489e-97ae-f600d35a7613"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:27:31 crc kubenswrapper[4714]: I0129 16:27:31.993237 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0eecc358-9581-489e-97ae-f600d35a7613-util\") pod \"0eecc358-9581-489e-97ae-f600d35a7613\" (UID: \"0eecc358-9581-489e-97ae-f600d35a7613\") " Jan 29 16:27:31 crc kubenswrapper[4714]: I0129 16:27:31.993416 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4wwh\" (UniqueName: \"kubernetes.io/projected/0eecc358-9581-489e-97ae-f600d35a7613-kube-api-access-k4wwh\") pod \"0eecc358-9581-489e-97ae-f600d35a7613\" (UID: \"0eecc358-9581-489e-97ae-f600d35a7613\") " Jan 29 16:27:31 crc kubenswrapper[4714]: I0129 16:27:31.994757 4714 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0eecc358-9581-489e-97ae-f600d35a7613-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:31 crc kubenswrapper[4714]: I0129 16:27:31.998998 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eecc358-9581-489e-97ae-f600d35a7613-kube-api-access-k4wwh" (OuterVolumeSpecName: "kube-api-access-k4wwh") pod "0eecc358-9581-489e-97ae-f600d35a7613" (UID: "0eecc358-9581-489e-97ae-f600d35a7613"). InnerVolumeSpecName "kube-api-access-k4wwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:27:32 crc kubenswrapper[4714]: I0129 16:27:32.003493 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eecc358-9581-489e-97ae-f600d35a7613-util" (OuterVolumeSpecName: "util") pod "0eecc358-9581-489e-97ae-f600d35a7613" (UID: "0eecc358-9581-489e-97ae-f600d35a7613"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:27:32 crc kubenswrapper[4714]: I0129 16:27:32.096231 4714 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0eecc358-9581-489e-97ae-f600d35a7613-util\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:32 crc kubenswrapper[4714]: I0129 16:27:32.096284 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4wwh\" (UniqueName: \"kubernetes.io/projected/0eecc358-9581-489e-97ae-f600d35a7613-kube-api-access-k4wwh\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:32 crc kubenswrapper[4714]: I0129 16:27:32.612601 4714 generic.go:334] "Generic (PLEG): container finished" podID="6e33ba3d-9561-441b-b835-fbdb6ce97d23" containerID="02c31eb5896b8dc80e78bcd830c0e2f150e33491e2a451310581ca2dc793d036" exitCode=0 Jan 29 16:27:32 crc kubenswrapper[4714]: I0129 16:27:32.612691 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-sync-6xxc6" event={"ID":"6e33ba3d-9561-441b-b835-fbdb6ce97d23","Type":"ContainerDied","Data":"02c31eb5896b8dc80e78bcd830c0e2f150e33491e2a451310581ca2dc793d036"} Jan 29 16:27:32 crc kubenswrapper[4714]: I0129 16:27:32.615790 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6" event={"ID":"0eecc358-9581-489e-97ae-f600d35a7613","Type":"ContainerDied","Data":"ccf8807833090dda1a00ca4aa3c347ca65828dd70468671dc4659950b5c10bcf"} Jan 29 16:27:32 crc kubenswrapper[4714]: I0129 16:27:32.615829 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccf8807833090dda1a00ca4aa3c347ca65828dd70468671dc4659950b5c10bcf" Jan 29 16:27:32 crc kubenswrapper[4714]: I0129 16:27:32.616189 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6" Jan 29 16:27:33 crc kubenswrapper[4714]: I0129 16:27:33.930059 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db-sync-6xxc6" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.124480 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7s7m\" (UniqueName: \"kubernetes.io/projected/6e33ba3d-9561-441b-b835-fbdb6ce97d23-kube-api-access-f7s7m\") pod \"6e33ba3d-9561-441b-b835-fbdb6ce97d23\" (UID: \"6e33ba3d-9561-441b-b835-fbdb6ce97d23\") " Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.124544 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e33ba3d-9561-441b-b835-fbdb6ce97d23-config-data\") pod \"6e33ba3d-9561-441b-b835-fbdb6ce97d23\" (UID: \"6e33ba3d-9561-441b-b835-fbdb6ce97d23\") " Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.129919 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e33ba3d-9561-441b-b835-fbdb6ce97d23-kube-api-access-f7s7m" (OuterVolumeSpecName: "kube-api-access-f7s7m") pod "6e33ba3d-9561-441b-b835-fbdb6ce97d23" (UID: "6e33ba3d-9561-441b-b835-fbdb6ce97d23"). InnerVolumeSpecName "kube-api-access-f7s7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.154431 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e33ba3d-9561-441b-b835-fbdb6ce97d23-config-data" (OuterVolumeSpecName: "config-data") pod "6e33ba3d-9561-441b-b835-fbdb6ce97d23" (UID: "6e33ba3d-9561-441b-b835-fbdb6ce97d23"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.226411 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7s7m\" (UniqueName: \"kubernetes.io/projected/6e33ba3d-9561-441b-b835-fbdb6ce97d23-kube-api-access-f7s7m\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.226444 4714 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e33ba3d-9561-441b-b835-fbdb6ce97d23-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.634538 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-sync-6xxc6" event={"ID":"6e33ba3d-9561-441b-b835-fbdb6ce97d23","Type":"ContainerDied","Data":"cb8e5c568c512500695d36838718c30771a365e70a1f7bf91812aeb8914126a0"} Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.634587 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb8e5c568c512500695d36838718c30771a365e70a1f7bf91812aeb8914126a0" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.634616 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db-sync-6xxc6" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.805257 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/keystone-bootstrap-kj27d"] Jan 29 16:27:34 crc kubenswrapper[4714]: E0129 16:27:34.805502 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eecc358-9581-489e-97ae-f600d35a7613" containerName="extract" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.805513 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eecc358-9581-489e-97ae-f600d35a7613" containerName="extract" Jan 29 16:27:34 crc kubenswrapper[4714]: E0129 16:27:34.805524 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eecc358-9581-489e-97ae-f600d35a7613" containerName="pull" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.805529 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eecc358-9581-489e-97ae-f600d35a7613" containerName="pull" Jan 29 16:27:34 crc kubenswrapper[4714]: E0129 16:27:34.805546 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eecc358-9581-489e-97ae-f600d35a7613" containerName="util" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.805552 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eecc358-9581-489e-97ae-f600d35a7613" containerName="util" Jan 29 16:27:34 crc kubenswrapper[4714]: E0129 16:27:34.805562 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e33ba3d-9561-441b-b835-fbdb6ce97d23" containerName="keystone-db-sync" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.805568 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e33ba3d-9561-441b-b835-fbdb6ce97d23" containerName="keystone-db-sync" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.805672 4714 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="0eecc358-9581-489e-97ae-f600d35a7613" containerName="extract" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.805680 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e33ba3d-9561-441b-b835-fbdb6ce97d23" containerName="keystone-db-sync" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.806114 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-bootstrap-kj27d" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.809094 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-keystone-dockercfg-6m228" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.809561 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-scripts" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.810253 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"osp-secret" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.811759 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.812037 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-config-data" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.815314 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-bootstrap-kj27d"] Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.835517 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f3e03982-d953-488f-a01a-5024f64ad7da-credential-keys\") pod \"keystone-bootstrap-kj27d\" (UID: \"f3e03982-d953-488f-a01a-5024f64ad7da\") " pod="cinder-kuttl-tests/keystone-bootstrap-kj27d" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.835566 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e03982-d953-488f-a01a-5024f64ad7da-config-data\") pod \"keystone-bootstrap-kj27d\" (UID: \"f3e03982-d953-488f-a01a-5024f64ad7da\") " pod="cinder-kuttl-tests/keystone-bootstrap-kj27d" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.835586 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f3e03982-d953-488f-a01a-5024f64ad7da-fernet-keys\") pod \"keystone-bootstrap-kj27d\" (UID: \"f3e03982-d953-488f-a01a-5024f64ad7da\") " pod="cinder-kuttl-tests/keystone-bootstrap-kj27d" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.835614 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7nmp\" (UniqueName: \"kubernetes.io/projected/f3e03982-d953-488f-a01a-5024f64ad7da-kube-api-access-x7nmp\") pod \"keystone-bootstrap-kj27d\" (UID: \"f3e03982-d953-488f-a01a-5024f64ad7da\") " pod="cinder-kuttl-tests/keystone-bootstrap-kj27d" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.835651 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3e03982-d953-488f-a01a-5024f64ad7da-scripts\") pod \"keystone-bootstrap-kj27d\" (UID: \"f3e03982-d953-488f-a01a-5024f64ad7da\") " pod="cinder-kuttl-tests/keystone-bootstrap-kj27d" Jan 29 16:27:34 crc 
kubenswrapper[4714]: I0129 16:27:34.936684 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f3e03982-d953-488f-a01a-5024f64ad7da-credential-keys\") pod \"keystone-bootstrap-kj27d\" (UID: \"f3e03982-d953-488f-a01a-5024f64ad7da\") " pod="cinder-kuttl-tests/keystone-bootstrap-kj27d" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.936757 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e03982-d953-488f-a01a-5024f64ad7da-config-data\") pod \"keystone-bootstrap-kj27d\" (UID: \"f3e03982-d953-488f-a01a-5024f64ad7da\") " pod="cinder-kuttl-tests/keystone-bootstrap-kj27d" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.936780 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f3e03982-d953-488f-a01a-5024f64ad7da-fernet-keys\") pod \"keystone-bootstrap-kj27d\" (UID: \"f3e03982-d953-488f-a01a-5024f64ad7da\") " pod="cinder-kuttl-tests/keystone-bootstrap-kj27d" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.936824 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7nmp\" (UniqueName: \"kubernetes.io/projected/f3e03982-d953-488f-a01a-5024f64ad7da-kube-api-access-x7nmp\") pod \"keystone-bootstrap-kj27d\" (UID: \"f3e03982-d953-488f-a01a-5024f64ad7da\") " pod="cinder-kuttl-tests/keystone-bootstrap-kj27d" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.937623 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3e03982-d953-488f-a01a-5024f64ad7da-scripts\") pod \"keystone-bootstrap-kj27d\" (UID: \"f3e03982-d953-488f-a01a-5024f64ad7da\") " pod="cinder-kuttl-tests/keystone-bootstrap-kj27d" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.942671 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f3e03982-d953-488f-a01a-5024f64ad7da-credential-keys\") pod \"keystone-bootstrap-kj27d\" (UID: \"f3e03982-d953-488f-a01a-5024f64ad7da\") " pod="cinder-kuttl-tests/keystone-bootstrap-kj27d" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.943583 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f3e03982-d953-488f-a01a-5024f64ad7da-fernet-keys\") pod \"keystone-bootstrap-kj27d\" (UID: \"f3e03982-d953-488f-a01a-5024f64ad7da\") " pod="cinder-kuttl-tests/keystone-bootstrap-kj27d" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.944859 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e03982-d953-488f-a01a-5024f64ad7da-config-data\") pod \"keystone-bootstrap-kj27d\" (UID: \"f3e03982-d953-488f-a01a-5024f64ad7da\") " pod="cinder-kuttl-tests/keystone-bootstrap-kj27d" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.946793 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3e03982-d953-488f-a01a-5024f64ad7da-scripts\") pod \"keystone-bootstrap-kj27d\" (UID: \"f3e03982-d953-488f-a01a-5024f64ad7da\") " pod="cinder-kuttl-tests/keystone-bootstrap-kj27d" Jan 29 16:27:34 crc kubenswrapper[4714]: I0129 16:27:34.963556 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-x7nmp\" (UniqueName: \"kubernetes.io/projected/f3e03982-d953-488f-a01a-5024f64ad7da-kube-api-access-x7nmp\") pod \"keystone-bootstrap-kj27d\" (UID: \"f3e03982-d953-488f-a01a-5024f64ad7da\") " pod="cinder-kuttl-tests/keystone-bootstrap-kj27d" Jan 29 16:27:35 crc kubenswrapper[4714]: I0129 16:27:35.131731 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-bootstrap-kj27d" Jan 29 16:27:35 crc kubenswrapper[4714]: I0129 16:27:35.597879 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-bootstrap-kj27d"] Jan 29 16:27:35 crc kubenswrapper[4714]: I0129 16:27:35.663405 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-bootstrap-kj27d" event={"ID":"f3e03982-d953-488f-a01a-5024f64ad7da","Type":"ContainerStarted","Data":"a624e5601997570e21553a0e88b80fa0db701686457a6ca234e4cc2c3067c658"} Jan 29 16:27:36 crc kubenswrapper[4714]: I0129 16:27:36.671170 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-bootstrap-kj27d" event={"ID":"f3e03982-d953-488f-a01a-5024f64ad7da","Type":"ContainerStarted","Data":"6cf56f6dac5db6cefc7926b5a24bd8c2963224c5d6e15dd78662ec20f0cf0141"} Jan 29 16:27:36 crc kubenswrapper[4714]: I0129 16:27:36.688491 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/keystone-bootstrap-kj27d" podStartSLOduration=2.688476602 podStartE2EDuration="2.688476602s" podCreationTimestamp="2026-01-29 16:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:27:36.686893358 +0000 UTC m=+1063.207394478" watchObservedRunningTime="2026-01-29 16:27:36.688476602 +0000 UTC m=+1063.208977722" Jan 29 16:27:38 crc kubenswrapper[4714]: I0129 16:27:38.686219 4714 generic.go:334] "Generic (PLEG): container finished" podID="f3e03982-d953-488f-a01a-5024f64ad7da" containerID="6cf56f6dac5db6cefc7926b5a24bd8c2963224c5d6e15dd78662ec20f0cf0141" exitCode=0 Jan 29 16:27:38 crc kubenswrapper[4714]: I0129 16:27:38.686262 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-bootstrap-kj27d" event={"ID":"f3e03982-d953-488f-a01a-5024f64ad7da","Type":"ContainerDied","Data":"6cf56f6dac5db6cefc7926b5a24bd8c2963224c5d6e15dd78662ec20f0cf0141"} Jan 29 16:27:39 crc kubenswrapper[4714]: I0129 16:27:39.970498 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-bootstrap-kj27d" Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.117458 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7nmp\" (UniqueName: \"kubernetes.io/projected/f3e03982-d953-488f-a01a-5024f64ad7da-kube-api-access-x7nmp\") pod \"f3e03982-d953-488f-a01a-5024f64ad7da\" (UID: \"f3e03982-d953-488f-a01a-5024f64ad7da\") " Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.117551 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e03982-d953-488f-a01a-5024f64ad7da-config-data\") pod \"f3e03982-d953-488f-a01a-5024f64ad7da\" (UID: \"f3e03982-d953-488f-a01a-5024f64ad7da\") " Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.121795 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3e03982-d953-488f-a01a-5024f64ad7da-scripts\") pod \"f3e03982-d953-488f-a01a-5024f64ad7da\" (UID: \"f3e03982-d953-488f-a01a-5024f64ad7da\") " Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.122260 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f3e03982-d953-488f-a01a-5024f64ad7da-credential-keys\") pod \"f3e03982-d953-488f-a01a-5024f64ad7da\" (UID: \"f3e03982-d953-488f-a01a-5024f64ad7da\") " Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.122452 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f3e03982-d953-488f-a01a-5024f64ad7da-fernet-keys\") pod \"f3e03982-d953-488f-a01a-5024f64ad7da\" (UID: \"f3e03982-d953-488f-a01a-5024f64ad7da\") " Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.125179 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e03982-d953-488f-a01a-5024f64ad7da-scripts" (OuterVolumeSpecName: "scripts") pod "f3e03982-d953-488f-a01a-5024f64ad7da" (UID: "f3e03982-d953-488f-a01a-5024f64ad7da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.125300 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3e03982-d953-488f-a01a-5024f64ad7da-kube-api-access-x7nmp" (OuterVolumeSpecName: "kube-api-access-x7nmp") pod "f3e03982-d953-488f-a01a-5024f64ad7da" (UID: "f3e03982-d953-488f-a01a-5024f64ad7da"). InnerVolumeSpecName "kube-api-access-x7nmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.142173 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e03982-d953-488f-a01a-5024f64ad7da-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f3e03982-d953-488f-a01a-5024f64ad7da" (UID: "f3e03982-d953-488f-a01a-5024f64ad7da"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.143406 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e03982-d953-488f-a01a-5024f64ad7da-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f3e03982-d953-488f-a01a-5024f64ad7da" (UID: "f3e03982-d953-488f-a01a-5024f64ad7da"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.153681 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e03982-d953-488f-a01a-5024f64ad7da-config-data" (OuterVolumeSpecName: "config-data") pod "f3e03982-d953-488f-a01a-5024f64ad7da" (UID: "f3e03982-d953-488f-a01a-5024f64ad7da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.224725 4714 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f3e03982-d953-488f-a01a-5024f64ad7da-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.224764 4714 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f3e03982-d953-488f-a01a-5024f64ad7da-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.224776 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7nmp\" (UniqueName: \"kubernetes.io/projected/f3e03982-d953-488f-a01a-5024f64ad7da-kube-api-access-x7nmp\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.224787 4714 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3e03982-d953-488f-a01a-5024f64ad7da-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.224798 4714 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e03982-d953-488f-a01a-5024f64ad7da-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.698251 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-bootstrap-kj27d" event={"ID":"f3e03982-d953-488f-a01a-5024f64ad7da","Type":"ContainerDied","Data":"a624e5601997570e21553a0e88b80fa0db701686457a6ca234e4cc2c3067c658"} Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.698538 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a624e5601997570e21553a0e88b80fa0db701686457a6ca234e4cc2c3067c658" Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.698311 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-bootstrap-kj27d" Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.863686 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/keystone-db9b49999-6gd95"] Jan 29 16:27:40 crc kubenswrapper[4714]: E0129 16:27:40.863948 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e03982-d953-488f-a01a-5024f64ad7da" containerName="keystone-bootstrap" Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.863964 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e03982-d953-488f-a01a-5024f64ad7da" containerName="keystone-bootstrap" Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.864085 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3e03982-d953-488f-a01a-5024f64ad7da" containerName="keystone-bootstrap" Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.864515 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.866386 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-scripts" Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.866491 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-keystone-dockercfg-6m228" Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.866496 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone" Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.867197 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-config-data" Jan 29 16:27:40 crc kubenswrapper[4714]: I0129 16:27:40.884596 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-db9b49999-6gd95"] Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.036589 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc06a535-6f60-438e-b52d-5dc90fae8c67-scripts\") pod \"keystone-db9b49999-6gd95\" (UID: \"fc06a535-6f60-438e-b52d-5dc90fae8c67\") " pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.036692 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc06a535-6f60-438e-b52d-5dc90fae8c67-fernet-keys\") pod \"keystone-db9b49999-6gd95\" (UID: \"fc06a535-6f60-438e-b52d-5dc90fae8c67\") " pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.036729 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc06a535-6f60-438e-b52d-5dc90fae8c67-credential-keys\") pod \"keystone-db9b49999-6gd95\" (UID: \"fc06a535-6f60-438e-b52d-5dc90fae8c67\") " pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.036761 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2vn9\" (UniqueName: \"kubernetes.io/projected/fc06a535-6f60-438e-b52d-5dc90fae8c67-kube-api-access-c2vn9\") pod \"keystone-db9b49999-6gd95\" (UID: \"fc06a535-6f60-438e-b52d-5dc90fae8c67\") " pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.036823 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc06a535-6f60-438e-b52d-5dc90fae8c67-config-data\") pod \"keystone-db9b49999-6gd95\" (UID: \"fc06a535-6f60-438e-b52d-5dc90fae8c67\") " pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.138536 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc06a535-6f60-438e-b52d-5dc90fae8c67-config-data\") pod \"keystone-db9b49999-6gd95\" (UID: \"fc06a535-6f60-438e-b52d-5dc90fae8c67\") " pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.138606 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fc06a535-6f60-438e-b52d-5dc90fae8c67-scripts\") pod \"keystone-db9b49999-6gd95\" (UID: \"fc06a535-6f60-438e-b52d-5dc90fae8c67\") " pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.138674 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc06a535-6f60-438e-b52d-5dc90fae8c67-fernet-keys\") pod \"keystone-db9b49999-6gd95\" (UID: \"fc06a535-6f60-438e-b52d-5dc90fae8c67\") " pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.138714 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc06a535-6f60-438e-b52d-5dc90fae8c67-credential-keys\") pod \"keystone-db9b49999-6gd95\" (UID: \"fc06a535-6f60-438e-b52d-5dc90fae8c67\") " pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.138744 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2vn9\" (UniqueName: \"kubernetes.io/projected/fc06a535-6f60-438e-b52d-5dc90fae8c67-kube-api-access-c2vn9\") pod \"keystone-db9b49999-6gd95\" (UID: \"fc06a535-6f60-438e-b52d-5dc90fae8c67\") " pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.144107 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc06a535-6f60-438e-b52d-5dc90fae8c67-fernet-keys\") pod \"keystone-db9b49999-6gd95\" (UID: \"fc06a535-6f60-438e-b52d-5dc90fae8c67\") " pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.144125 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc06a535-6f60-438e-b52d-5dc90fae8c67-config-data\") pod \"keystone-db9b49999-6gd95\" (UID: \"fc06a535-6f60-438e-b52d-5dc90fae8c67\") " pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.144398 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc06a535-6f60-438e-b52d-5dc90fae8c67-scripts\") pod \"keystone-db9b49999-6gd95\" (UID: \"fc06a535-6f60-438e-b52d-5dc90fae8c67\") " pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.144832 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc06a535-6f60-438e-b52d-5dc90fae8c67-credential-keys\") pod \"keystone-db9b49999-6gd95\" (UID: \"fc06a535-6f60-438e-b52d-5dc90fae8c67\") " pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.161181 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2vn9\" (UniqueName: \"kubernetes.io/projected/fc06a535-6f60-438e-b52d-5dc90fae8c67-kube-api-access-c2vn9\") pod \"keystone-db9b49999-6gd95\" (UID: \"fc06a535-6f60-438e-b52d-5dc90fae8c67\") " pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.182176 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" Jan 29 16:27:41 crc kubenswrapper[4714]: E0129 16:27:41.185372 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.472098 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs"] Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.472909 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.475172 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-service-cert" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.475200 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-72wx2" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.495820 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs"] Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.643966 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9aa31790-7a3c-4a66-aace-c087c0221c6b-webhook-cert\") pod \"cinder-operator-controller-manager-5fc6d4b6f5-9mdcs\" (UID: \"9aa31790-7a3c-4a66-aace-c087c0221c6b\") " pod="openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.644015 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4dvn\" (UniqueName: \"kubernetes.io/projected/9aa31790-7a3c-4a66-aace-c087c0221c6b-kube-api-access-t4dvn\") pod \"cinder-operator-controller-manager-5fc6d4b6f5-9mdcs\" (UID: \"9aa31790-7a3c-4a66-aace-c087c0221c6b\") " pod="openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.644062 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9aa31790-7a3c-4a66-aace-c087c0221c6b-apiservice-cert\") pod \"cinder-operator-controller-manager-5fc6d4b6f5-9mdcs\" (UID: \"9aa31790-7a3c-4a66-aace-c087c0221c6b\") " pod="openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.647497 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-db9b49999-6gd95"] Jan 29 16:27:41 crc kubenswrapper[4714]: W0129 16:27:41.648476 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc06a535_6f60_438e_b52d_5dc90fae8c67.slice/crio-a35aebf0427b3f34153f6b20222d8016725e500d95a3072063dc0d02bd8d902e WatchSource:0}: Error finding container a35aebf0427b3f34153f6b20222d8016725e500d95a3072063dc0d02bd8d902e: Status 404 returned error can't find the container with id 
a35aebf0427b3f34153f6b20222d8016725e500d95a3072063dc0d02bd8d902e Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.705404 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" event={"ID":"fc06a535-6f60-438e-b52d-5dc90fae8c67","Type":"ContainerStarted","Data":"a35aebf0427b3f34153f6b20222d8016725e500d95a3072063dc0d02bd8d902e"} Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.744984 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9aa31790-7a3c-4a66-aace-c087c0221c6b-webhook-cert\") pod \"cinder-operator-controller-manager-5fc6d4b6f5-9mdcs\" (UID: \"9aa31790-7a3c-4a66-aace-c087c0221c6b\") " pod="openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.745034 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4dvn\" (UniqueName: \"kubernetes.io/projected/9aa31790-7a3c-4a66-aace-c087c0221c6b-kube-api-access-t4dvn\") pod \"cinder-operator-controller-manager-5fc6d4b6f5-9mdcs\" (UID: \"9aa31790-7a3c-4a66-aace-c087c0221c6b\") " pod="openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.745073 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9aa31790-7a3c-4a66-aace-c087c0221c6b-apiservice-cert\") pod \"cinder-operator-controller-manager-5fc6d4b6f5-9mdcs\" (UID: \"9aa31790-7a3c-4a66-aace-c087c0221c6b\") " pod="openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.749804 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9aa31790-7a3c-4a66-aace-c087c0221c6b-webhook-cert\") pod \"cinder-operator-controller-manager-5fc6d4b6f5-9mdcs\" (UID: \"9aa31790-7a3c-4a66-aace-c087c0221c6b\") " pod="openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.753586 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9aa31790-7a3c-4a66-aace-c087c0221c6b-apiservice-cert\") pod \"cinder-operator-controller-manager-5fc6d4b6f5-9mdcs\" (UID: \"9aa31790-7a3c-4a66-aace-c087c0221c6b\") " pod="openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.764680 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4dvn\" (UniqueName: \"kubernetes.io/projected/9aa31790-7a3c-4a66-aace-c087c0221c6b-kube-api-access-t4dvn\") pod \"cinder-operator-controller-manager-5fc6d4b6f5-9mdcs\" (UID: \"9aa31790-7a3c-4a66-aace-c087c0221c6b\") " pod="openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs" Jan 29 16:27:41 crc kubenswrapper[4714]: I0129 16:27:41.795655 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs" Jan 29 16:27:42 crc kubenswrapper[4714]: I0129 16:27:42.238622 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs"] Jan 29 16:27:42 crc kubenswrapper[4714]: W0129 16:27:42.243561 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9aa31790_7a3c_4a66_aace_c087c0221c6b.slice/crio-c82d440d4556664aec8a776ead06b3d925538dac3151f1c3b85d4cf089d48d43 WatchSource:0}: Error finding container c82d440d4556664aec8a776ead06b3d925538dac3151f1c3b85d4cf089d48d43: Status 404 returned error can't find the container with id c82d440d4556664aec8a776ead06b3d925538dac3151f1c3b85d4cf089d48d43 Jan 29 16:27:42 crc kubenswrapper[4714]: I0129 16:27:42.714635 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs" event={"ID":"9aa31790-7a3c-4a66-aace-c087c0221c6b","Type":"ContainerStarted","Data":"c82d440d4556664aec8a776ead06b3d925538dac3151f1c3b85d4cf089d48d43"} Jan 29 16:27:42 crc kubenswrapper[4714]: I0129 16:27:42.716488 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" event={"ID":"fc06a535-6f60-438e-b52d-5dc90fae8c67","Type":"ContainerStarted","Data":"4adeb441736bb7d27c549844b9147979d27f5dbed7e42a9b990619d275a00000"} Jan 29 16:27:42 crc kubenswrapper[4714]: I0129 16:27:42.716942 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" Jan 29 16:27:42 crc kubenswrapper[4714]: I0129 16:27:42.742143 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" podStartSLOduration=2.742116393 podStartE2EDuration="2.742116393s" podCreationTimestamp="2026-01-29 16:27:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:27:42.733188242 +0000 UTC m=+1069.253689362" watchObservedRunningTime="2026-01-29 16:27:42.742116393 +0000 UTC m=+1069.262617513" Jan 29 16:27:45 crc kubenswrapper[4714]: I0129 16:27:45.748701 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs" event={"ID":"9aa31790-7a3c-4a66-aace-c087c0221c6b","Type":"ContainerStarted","Data":"36513d8368599e830d1d92fc041ab07bd030a4ddf286f084b21d04377e2a5dbd"} Jan 29 16:27:45 crc kubenswrapper[4714]: I0129 16:27:45.749604 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs" Jan 29 16:27:45 crc kubenswrapper[4714]: I0129 16:27:45.765324 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs" podStartSLOduration=2.200486921 podStartE2EDuration="4.765307912s" podCreationTimestamp="2026-01-29 16:27:41 +0000 UTC" firstStartedPulling="2026-01-29 16:27:42.245788463 +0000 UTC m=+1068.766289583" lastFinishedPulling="2026-01-29 16:27:44.810609454 +0000 UTC m=+1071.331110574" observedRunningTime="2026-01-29 16:27:45.761414859 +0000 UTC m=+1072.281915979" watchObservedRunningTime="2026-01-29 16:27:45.765307912 +0000 UTC m=+1072.285809022" Jan 29 16:27:51 crc kubenswrapper[4714]: I0129 16:27:51.800173 4714 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs" Jan 29 16:27:52 crc kubenswrapper[4714]: E0129 16:27:52.186568 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:27:55 crc kubenswrapper[4714]: I0129 16:27:55.163894 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-db-create-64s46"] Jan 29 16:27:55 crc kubenswrapper[4714]: I0129 16:27:55.164934 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-64s46" Jan 29 16:27:55 crc kubenswrapper[4714]: I0129 16:27:55.170025 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-9de3-account-create-update-79889"] Jan 29 16:27:55 crc kubenswrapper[4714]: I0129 16:27:55.170828 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-9de3-account-create-update-79889" Jan 29 16:27:55 crc kubenswrapper[4714]: I0129 16:27:55.172732 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-db-secret" Jan 29 16:27:55 crc kubenswrapper[4714]: I0129 16:27:55.177592 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-64s46"] Jan 29 16:27:55 crc kubenswrapper[4714]: I0129 16:27:55.184461 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-9de3-account-create-update-79889"] Jan 29 16:27:55 crc kubenswrapper[4714]: I0129 16:27:55.233436 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rznp\" (UniqueName: \"kubernetes.io/projected/e670557b-650e-478c-9f87-eaba6641f02f-kube-api-access-4rznp\") pod \"cinder-9de3-account-create-update-79889\" (UID: \"e670557b-650e-478c-9f87-eaba6641f02f\") " pod="cinder-kuttl-tests/cinder-9de3-account-create-update-79889" Jan 29 16:27:55 crc kubenswrapper[4714]: I0129 16:27:55.233498 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c747693b-f2e9-4073-9432-115643a6b6d1-operator-scripts\") pod \"cinder-db-create-64s46\" (UID: \"c747693b-f2e9-4073-9432-115643a6b6d1\") " pod="cinder-kuttl-tests/cinder-db-create-64s46" Jan 29 16:27:55 crc kubenswrapper[4714]: I0129 16:27:55.233596 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e670557b-650e-478c-9f87-eaba6641f02f-operator-scripts\") pod \"cinder-9de3-account-create-update-79889\" (UID: \"e670557b-650e-478c-9f87-eaba6641f02f\") " pod="cinder-kuttl-tests/cinder-9de3-account-create-update-79889" Jan 29 16:27:55 crc kubenswrapper[4714]: I0129 16:27:55.233627 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxhv6\" (UniqueName: \"kubernetes.io/projected/c747693b-f2e9-4073-9432-115643a6b6d1-kube-api-access-fxhv6\") pod \"cinder-db-create-64s46\" (UID: \"c747693b-f2e9-4073-9432-115643a6b6d1\") " pod="cinder-kuttl-tests/cinder-db-create-64s46" Jan 29 16:27:55 crc 
kubenswrapper[4714]: I0129 16:27:55.334793 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rznp\" (UniqueName: \"kubernetes.io/projected/e670557b-650e-478c-9f87-eaba6641f02f-kube-api-access-4rznp\") pod \"cinder-9de3-account-create-update-79889\" (UID: \"e670557b-650e-478c-9f87-eaba6641f02f\") " pod="cinder-kuttl-tests/cinder-9de3-account-create-update-79889" Jan 29 16:27:55 crc kubenswrapper[4714]: I0129 16:27:55.335248 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c747693b-f2e9-4073-9432-115643a6b6d1-operator-scripts\") pod \"cinder-db-create-64s46\" (UID: \"c747693b-f2e9-4073-9432-115643a6b6d1\") " pod="cinder-kuttl-tests/cinder-db-create-64s46" Jan 29 16:27:55 crc kubenswrapper[4714]: I0129 16:27:55.335911 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c747693b-f2e9-4073-9432-115643a6b6d1-operator-scripts\") pod \"cinder-db-create-64s46\" (UID: \"c747693b-f2e9-4073-9432-115643a6b6d1\") " pod="cinder-kuttl-tests/cinder-db-create-64s46" Jan 29 16:27:55 crc kubenswrapper[4714]: I0129 16:27:55.336053 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e670557b-650e-478c-9f87-eaba6641f02f-operator-scripts\") pod \"cinder-9de3-account-create-update-79889\" (UID: \"e670557b-650e-478c-9f87-eaba6641f02f\") " pod="cinder-kuttl-tests/cinder-9de3-account-create-update-79889" Jan 29 16:27:55 crc kubenswrapper[4714]: I0129 16:27:55.336684 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e670557b-650e-478c-9f87-eaba6641f02f-operator-scripts\") pod \"cinder-9de3-account-create-update-79889\" (UID: \"e670557b-650e-478c-9f87-eaba6641f02f\") " pod="cinder-kuttl-tests/cinder-9de3-account-create-update-79889" Jan 29 16:27:55 crc kubenswrapper[4714]: I0129 16:27:55.336751 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxhv6\" (UniqueName: \"kubernetes.io/projected/c747693b-f2e9-4073-9432-115643a6b6d1-kube-api-access-fxhv6\") pod \"cinder-db-create-64s46\" (UID: \"c747693b-f2e9-4073-9432-115643a6b6d1\") " pod="cinder-kuttl-tests/cinder-db-create-64s46" Jan 29 16:27:55 crc kubenswrapper[4714]: I0129 16:27:55.359737 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rznp\" (UniqueName: \"kubernetes.io/projected/e670557b-650e-478c-9f87-eaba6641f02f-kube-api-access-4rznp\") pod \"cinder-9de3-account-create-update-79889\" (UID: \"e670557b-650e-478c-9f87-eaba6641f02f\") " pod="cinder-kuttl-tests/cinder-9de3-account-create-update-79889" Jan 29 16:27:55 crc kubenswrapper[4714]: I0129 16:27:55.376498 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxhv6\" (UniqueName: \"kubernetes.io/projected/c747693b-f2e9-4073-9432-115643a6b6d1-kube-api-access-fxhv6\") pod \"cinder-db-create-64s46\" (UID: \"c747693b-f2e9-4073-9432-115643a6b6d1\") " pod="cinder-kuttl-tests/cinder-db-create-64s46" Jan 29 16:27:55 crc kubenswrapper[4714]: I0129 16:27:55.481701 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-64s46" Jan 29 16:27:55 crc kubenswrapper[4714]: I0129 16:27:55.500435 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-9de3-account-create-update-79889" Jan 29 16:27:56 crc kubenswrapper[4714]: W0129 16:27:56.008208 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc747693b_f2e9_4073_9432_115643a6b6d1.slice/crio-ef09ca095d654272ce164d737756bdd07a31ec4b52f6796143d33c3b42a8fa27 WatchSource:0}: Error finding container ef09ca095d654272ce164d737756bdd07a31ec4b52f6796143d33c3b42a8fa27: Status 404 returned error can't find the container with id ef09ca095d654272ce164d737756bdd07a31ec4b52f6796143d33c3b42a8fa27 Jan 29 16:27:56 crc kubenswrapper[4714]: I0129 16:27:56.010465 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-64s46"] Jan 29 16:27:56 crc kubenswrapper[4714]: I0129 16:27:56.049747 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-9de3-account-create-update-79889"] Jan 29 16:27:56 crc kubenswrapper[4714]: W0129 16:27:56.050716 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode670557b_650e_478c_9f87_eaba6641f02f.slice/crio-326a8c0db24835df9017b544cd8815b14dee4cd8df1be6f526c72cc4768020f6 WatchSource:0}: Error finding container 326a8c0db24835df9017b544cd8815b14dee4cd8df1be6f526c72cc4768020f6: Status 404 returned error can't find the container with id 326a8c0db24835df9017b544cd8815b14dee4cd8df1be6f526c72cc4768020f6 Jan 29 16:27:56 crc kubenswrapper[4714]: I0129 16:27:56.840621 4714 generic.go:334] "Generic (PLEG): container finished" podID="c747693b-f2e9-4073-9432-115643a6b6d1" containerID="cd8982aadd49edb0050578e4754c053be8b4e593ef390dafa6884ec2cec1fd5d" exitCode=0 Jan 29 16:27:56 crc kubenswrapper[4714]: I0129 16:27:56.840674 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-64s46" event={"ID":"c747693b-f2e9-4073-9432-115643a6b6d1","Type":"ContainerDied","Data":"cd8982aadd49edb0050578e4754c053be8b4e593ef390dafa6884ec2cec1fd5d"} Jan 29 16:27:56 crc kubenswrapper[4714]: I0129 16:27:56.841003 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-64s46" event={"ID":"c747693b-f2e9-4073-9432-115643a6b6d1","Type":"ContainerStarted","Data":"ef09ca095d654272ce164d737756bdd07a31ec4b52f6796143d33c3b42a8fa27"} Jan 29 16:27:56 crc kubenswrapper[4714]: I0129 16:27:56.842472 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-9de3-account-create-update-79889" event={"ID":"e670557b-650e-478c-9f87-eaba6641f02f","Type":"ContainerStarted","Data":"bc9178e686ab88b7e47825dd5faad2c6f1b972c479a40bdd9847878e376e9b8c"} Jan 29 16:27:56 crc kubenswrapper[4714]: I0129 16:27:56.842545 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-9de3-account-create-update-79889" event={"ID":"e670557b-650e-478c-9f87-eaba6641f02f","Type":"ContainerStarted","Data":"326a8c0db24835df9017b544cd8815b14dee4cd8df1be6f526c72cc4768020f6"} Jan 29 16:27:56 crc kubenswrapper[4714]: I0129 16:27:56.894591 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-9de3-account-create-update-79889" podStartSLOduration=1.894569857 podStartE2EDuration="1.894569857s" podCreationTimestamp="2026-01-29 16:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 
16:27:56.889322564 +0000 UTC m=+1083.409823684" watchObservedRunningTime="2026-01-29 16:27:56.894569857 +0000 UTC m=+1083.415070977" Jan 29 16:27:57 crc kubenswrapper[4714]: I0129 16:27:57.850605 4714 generic.go:334] "Generic (PLEG): container finished" podID="e670557b-650e-478c-9f87-eaba6641f02f" containerID="bc9178e686ab88b7e47825dd5faad2c6f1b972c479a40bdd9847878e376e9b8c" exitCode=0 Jan 29 16:27:57 crc kubenswrapper[4714]: I0129 16:27:57.850698 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-9de3-account-create-update-79889" event={"ID":"e670557b-650e-478c-9f87-eaba6641f02f","Type":"ContainerDied","Data":"bc9178e686ab88b7e47825dd5faad2c6f1b972c479a40bdd9847878e376e9b8c"} Jan 29 16:27:58 crc kubenswrapper[4714]: I0129 16:27:58.214235 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-64s46" Jan 29 16:27:58 crc kubenswrapper[4714]: I0129 16:27:58.376486 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxhv6\" (UniqueName: \"kubernetes.io/projected/c747693b-f2e9-4073-9432-115643a6b6d1-kube-api-access-fxhv6\") pod \"c747693b-f2e9-4073-9432-115643a6b6d1\" (UID: \"c747693b-f2e9-4073-9432-115643a6b6d1\") " Jan 29 16:27:58 crc kubenswrapper[4714]: I0129 16:27:58.376533 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c747693b-f2e9-4073-9432-115643a6b6d1-operator-scripts\") pod \"c747693b-f2e9-4073-9432-115643a6b6d1\" (UID: \"c747693b-f2e9-4073-9432-115643a6b6d1\") " Jan 29 16:27:58 crc kubenswrapper[4714]: I0129 16:27:58.377622 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c747693b-f2e9-4073-9432-115643a6b6d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c747693b-f2e9-4073-9432-115643a6b6d1" (UID: "c747693b-f2e9-4073-9432-115643a6b6d1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:27:58 crc kubenswrapper[4714]: I0129 16:27:58.385952 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c747693b-f2e9-4073-9432-115643a6b6d1-kube-api-access-fxhv6" (OuterVolumeSpecName: "kube-api-access-fxhv6") pod "c747693b-f2e9-4073-9432-115643a6b6d1" (UID: "c747693b-f2e9-4073-9432-115643a6b6d1"). InnerVolumeSpecName "kube-api-access-fxhv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:27:58 crc kubenswrapper[4714]: I0129 16:27:58.478576 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxhv6\" (UniqueName: \"kubernetes.io/projected/c747693b-f2e9-4073-9432-115643a6b6d1-kube-api-access-fxhv6\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:58 crc kubenswrapper[4714]: I0129 16:27:58.478614 4714 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c747693b-f2e9-4073-9432-115643a6b6d1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:58 crc kubenswrapper[4714]: I0129 16:27:58.858548 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-64s46" Jan 29 16:27:58 crc kubenswrapper[4714]: I0129 16:27:58.858548 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-64s46" event={"ID":"c747693b-f2e9-4073-9432-115643a6b6d1","Type":"ContainerDied","Data":"ef09ca095d654272ce164d737756bdd07a31ec4b52f6796143d33c3b42a8fa27"} Jan 29 16:27:58 crc kubenswrapper[4714]: I0129 16:27:58.858972 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef09ca095d654272ce164d737756bdd07a31ec4b52f6796143d33c3b42a8fa27" Jan 29 16:27:59 crc kubenswrapper[4714]: I0129 16:27:59.134146 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-9de3-account-create-update-79889" Jan 29 16:27:59 crc kubenswrapper[4714]: I0129 16:27:59.287396 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e670557b-650e-478c-9f87-eaba6641f02f-operator-scripts\") pod \"e670557b-650e-478c-9f87-eaba6641f02f\" (UID: \"e670557b-650e-478c-9f87-eaba6641f02f\") " Jan 29 16:27:59 crc kubenswrapper[4714]: I0129 16:27:59.287504 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rznp\" (UniqueName: \"kubernetes.io/projected/e670557b-650e-478c-9f87-eaba6641f02f-kube-api-access-4rznp\") pod \"e670557b-650e-478c-9f87-eaba6641f02f\" (UID: \"e670557b-650e-478c-9f87-eaba6641f02f\") " Jan 29 16:27:59 crc kubenswrapper[4714]: I0129 16:27:59.288470 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e670557b-650e-478c-9f87-eaba6641f02f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e670557b-650e-478c-9f87-eaba6641f02f" (UID: "e670557b-650e-478c-9f87-eaba6641f02f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:27:59 crc kubenswrapper[4714]: I0129 16:27:59.298088 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e670557b-650e-478c-9f87-eaba6641f02f-kube-api-access-4rznp" (OuterVolumeSpecName: "kube-api-access-4rznp") pod "e670557b-650e-478c-9f87-eaba6641f02f" (UID: "e670557b-650e-478c-9f87-eaba6641f02f"). InnerVolumeSpecName "kube-api-access-4rznp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:27:59 crc kubenswrapper[4714]: I0129 16:27:59.389619 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rznp\" (UniqueName: \"kubernetes.io/projected/e670557b-650e-478c-9f87-eaba6641f02f-kube-api-access-4rznp\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:59 crc kubenswrapper[4714]: I0129 16:27:59.389896 4714 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e670557b-650e-478c-9f87-eaba6641f02f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:59 crc kubenswrapper[4714]: I0129 16:27:59.866798 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-9de3-account-create-update-79889" event={"ID":"e670557b-650e-478c-9f87-eaba6641f02f","Type":"ContainerDied","Data":"326a8c0db24835df9017b544cd8815b14dee4cd8df1be6f526c72cc4768020f6"} Jan 29 16:27:59 crc kubenswrapper[4714]: I0129 16:27:59.866839 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="326a8c0db24835df9017b544cd8815b14dee4cd8df1be6f526c72cc4768020f6" Jan 29 16:27:59 crc kubenswrapper[4714]: I0129 16:27:59.866866 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-9de3-account-create-update-79889" Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.508137 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-9pvrg"] Jan 29 16:28:00 crc kubenswrapper[4714]: E0129 16:28:00.508432 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c747693b-f2e9-4073-9432-115643a6b6d1" containerName="mariadb-database-create" Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.508454 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="c747693b-f2e9-4073-9432-115643a6b6d1" containerName="mariadb-database-create" Jan 29 16:28:00 crc kubenswrapper[4714]: E0129 16:28:00.508482 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e670557b-650e-478c-9f87-eaba6641f02f" containerName="mariadb-account-create-update" Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.508492 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="e670557b-650e-478c-9f87-eaba6641f02f" containerName="mariadb-account-create-update" Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.508631 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="e670557b-650e-478c-9f87-eaba6641f02f" containerName="mariadb-account-create-update" Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.508645 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="c747693b-f2e9-4073-9432-115643a6b6d1" containerName="mariadb-database-create" Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.509127 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-9pvrg" Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.514877 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-cinder-dockercfg-xw2vq" Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.515154 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-scripts" Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.515452 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-config-data" Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.531365 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-9pvrg"] Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.606277 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcnsl\" (UniqueName: \"kubernetes.io/projected/f09789c2-52ed-4321-95f8-02c3b3f271e3-kube-api-access-hcnsl\") pod \"cinder-db-sync-9pvrg\" (UID: \"f09789c2-52ed-4321-95f8-02c3b3f271e3\") " pod="cinder-kuttl-tests/cinder-db-sync-9pvrg" Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.606330 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f09789c2-52ed-4321-95f8-02c3b3f271e3-etc-machine-id\") pod \"cinder-db-sync-9pvrg\" (UID: \"f09789c2-52ed-4321-95f8-02c3b3f271e3\") " pod="cinder-kuttl-tests/cinder-db-sync-9pvrg" Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.606436 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f09789c2-52ed-4321-95f8-02c3b3f271e3-config-data\") pod \"cinder-db-sync-9pvrg\" (UID: \"f09789c2-52ed-4321-95f8-02c3b3f271e3\") " pod="cinder-kuttl-tests/cinder-db-sync-9pvrg" Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.606484 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f09789c2-52ed-4321-95f8-02c3b3f271e3-scripts\") pod \"cinder-db-sync-9pvrg\" (UID: \"f09789c2-52ed-4321-95f8-02c3b3f271e3\") " pod="cinder-kuttl-tests/cinder-db-sync-9pvrg" Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.606661 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f09789c2-52ed-4321-95f8-02c3b3f271e3-db-sync-config-data\") pod \"cinder-db-sync-9pvrg\" (UID: \"f09789c2-52ed-4321-95f8-02c3b3f271e3\") " pod="cinder-kuttl-tests/cinder-db-sync-9pvrg" Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.707892 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcnsl\" (UniqueName: \"kubernetes.io/projected/f09789c2-52ed-4321-95f8-02c3b3f271e3-kube-api-access-hcnsl\") pod \"cinder-db-sync-9pvrg\" (UID: \"f09789c2-52ed-4321-95f8-02c3b3f271e3\") " pod="cinder-kuttl-tests/cinder-db-sync-9pvrg" Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.708006 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f09789c2-52ed-4321-95f8-02c3b3f271e3-etc-machine-id\") pod \"cinder-db-sync-9pvrg\" (UID: \"f09789c2-52ed-4321-95f8-02c3b3f271e3\") " pod="cinder-kuttl-tests/cinder-db-sync-9pvrg" Jan 29 16:28:00 
crc kubenswrapper[4714]: I0129 16:28:00.708058 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f09789c2-52ed-4321-95f8-02c3b3f271e3-config-data\") pod \"cinder-db-sync-9pvrg\" (UID: \"f09789c2-52ed-4321-95f8-02c3b3f271e3\") " pod="cinder-kuttl-tests/cinder-db-sync-9pvrg" Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.708085 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f09789c2-52ed-4321-95f8-02c3b3f271e3-scripts\") pod \"cinder-db-sync-9pvrg\" (UID: \"f09789c2-52ed-4321-95f8-02c3b3f271e3\") " pod="cinder-kuttl-tests/cinder-db-sync-9pvrg" Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.708151 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f09789c2-52ed-4321-95f8-02c3b3f271e3-db-sync-config-data\") pod \"cinder-db-sync-9pvrg\" (UID: \"f09789c2-52ed-4321-95f8-02c3b3f271e3\") " pod="cinder-kuttl-tests/cinder-db-sync-9pvrg" Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.708084 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f09789c2-52ed-4321-95f8-02c3b3f271e3-etc-machine-id\") pod \"cinder-db-sync-9pvrg\" (UID: \"f09789c2-52ed-4321-95f8-02c3b3f271e3\") " pod="cinder-kuttl-tests/cinder-db-sync-9pvrg" Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.725720 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f09789c2-52ed-4321-95f8-02c3b3f271e3-scripts\") pod \"cinder-db-sync-9pvrg\" (UID: \"f09789c2-52ed-4321-95f8-02c3b3f271e3\") " pod="cinder-kuttl-tests/cinder-db-sync-9pvrg" Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.726857 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f09789c2-52ed-4321-95f8-02c3b3f271e3-config-data\") pod \"cinder-db-sync-9pvrg\" (UID: \"f09789c2-52ed-4321-95f8-02c3b3f271e3\") " pod="cinder-kuttl-tests/cinder-db-sync-9pvrg" Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.727140 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f09789c2-52ed-4321-95f8-02c3b3f271e3-db-sync-config-data\") pod \"cinder-db-sync-9pvrg\" (UID: \"f09789c2-52ed-4321-95f8-02c3b3f271e3\") " pod="cinder-kuttl-tests/cinder-db-sync-9pvrg" Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.730401 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcnsl\" (UniqueName: \"kubernetes.io/projected/f09789c2-52ed-4321-95f8-02c3b3f271e3-kube-api-access-hcnsl\") pod \"cinder-db-sync-9pvrg\" (UID: \"f09789c2-52ed-4321-95f8-02c3b3f271e3\") " pod="cinder-kuttl-tests/cinder-db-sync-9pvrg" Jan 29 16:28:00 crc kubenswrapper[4714]: I0129 16:28:00.840603 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-9pvrg" Jan 29 16:28:01 crc kubenswrapper[4714]: I0129 16:28:01.277725 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-9pvrg"] Jan 29 16:28:01 crc kubenswrapper[4714]: I0129 16:28:01.882180 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-9pvrg" event={"ID":"f09789c2-52ed-4321-95f8-02c3b3f271e3","Type":"ContainerStarted","Data":"915808b45f9976849e0c7fd3de3f5846a7cc68f45e22deab7e1b486fb97c6590"} Jan 29 16:28:05 crc kubenswrapper[4714]: E0129 16:28:05.190414 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:28:12 crc kubenswrapper[4714]: I0129 16:28:12.694400 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" Jan 29 16:28:17 crc kubenswrapper[4714]: E0129 16:28:17.752574 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:28:20 crc kubenswrapper[4714]: I0129 16:28:20.059051 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-9pvrg" event={"ID":"f09789c2-52ed-4321-95f8-02c3b3f271e3","Type":"ContainerStarted","Data":"83eeaf58ca15604fd125219fc4be09b86cbb0308fb89b0438c4ada31625917b5"} Jan 29 16:28:20 crc kubenswrapper[4714]: I0129 16:28:20.084801 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-db-sync-9pvrg" podStartSLOduration=2.552132126 podStartE2EDuration="20.0847782s" podCreationTimestamp="2026-01-29 16:28:00 +0000 UTC" firstStartedPulling="2026-01-29 16:28:01.288231275 +0000 UTC m=+1087.808732395" lastFinishedPulling="2026-01-29 16:28:18.820877329 +0000 UTC m=+1105.341378469" observedRunningTime="2026-01-29 16:28:20.079298367 +0000 UTC m=+1106.599799507" watchObservedRunningTime="2026-01-29 16:28:20.0847782 +0000 UTC m=+1106.605279330" Jan 29 16:28:27 crc kubenswrapper[4714]: I0129 16:28:27.112476 4714 generic.go:334] "Generic (PLEG): container finished" podID="f09789c2-52ed-4321-95f8-02c3b3f271e3" containerID="83eeaf58ca15604fd125219fc4be09b86cbb0308fb89b0438c4ada31625917b5" exitCode=0 Jan 29 16:28:27 crc kubenswrapper[4714]: I0129 16:28:27.112543 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-9pvrg" event={"ID":"f09789c2-52ed-4321-95f8-02c3b3f271e3","Type":"ContainerDied","Data":"83eeaf58ca15604fd125219fc4be09b86cbb0308fb89b0438c4ada31625917b5"} Jan 29 16:28:28 crc kubenswrapper[4714]: E0129 16:28:28.185980 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:28:28 crc kubenswrapper[4714]: I0129 16:28:28.379244 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-9pvrg" Jan 29 16:28:28 crc kubenswrapper[4714]: I0129 16:28:28.550997 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f09789c2-52ed-4321-95f8-02c3b3f271e3-etc-machine-id\") pod \"f09789c2-52ed-4321-95f8-02c3b3f271e3\" (UID: \"f09789c2-52ed-4321-95f8-02c3b3f271e3\") " Jan 29 16:28:28 crc kubenswrapper[4714]: I0129 16:28:28.551067 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f09789c2-52ed-4321-95f8-02c3b3f271e3-db-sync-config-data\") pod \"f09789c2-52ed-4321-95f8-02c3b3f271e3\" (UID: \"f09789c2-52ed-4321-95f8-02c3b3f271e3\") " Jan 29 16:28:28 crc kubenswrapper[4714]: I0129 16:28:28.551110 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f09789c2-52ed-4321-95f8-02c3b3f271e3-config-data\") pod \"f09789c2-52ed-4321-95f8-02c3b3f271e3\" (UID: \"f09789c2-52ed-4321-95f8-02c3b3f271e3\") " Jan 29 16:28:28 crc kubenswrapper[4714]: I0129 16:28:28.551170 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f09789c2-52ed-4321-95f8-02c3b3f271e3-scripts\") pod \"f09789c2-52ed-4321-95f8-02c3b3f271e3\" (UID: \"f09789c2-52ed-4321-95f8-02c3b3f271e3\") " Jan 29 16:28:28 crc kubenswrapper[4714]: I0129 16:28:28.551222 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f09789c2-52ed-4321-95f8-02c3b3f271e3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f09789c2-52ed-4321-95f8-02c3b3f271e3" (UID: "f09789c2-52ed-4321-95f8-02c3b3f271e3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:28:28 crc kubenswrapper[4714]: I0129 16:28:28.551262 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcnsl\" (UniqueName: \"kubernetes.io/projected/f09789c2-52ed-4321-95f8-02c3b3f271e3-kube-api-access-hcnsl\") pod \"f09789c2-52ed-4321-95f8-02c3b3f271e3\" (UID: \"f09789c2-52ed-4321-95f8-02c3b3f271e3\") " Jan 29 16:28:28 crc kubenswrapper[4714]: I0129 16:28:28.551682 4714 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f09789c2-52ed-4321-95f8-02c3b3f271e3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:28 crc kubenswrapper[4714]: I0129 16:28:28.559075 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f09789c2-52ed-4321-95f8-02c3b3f271e3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f09789c2-52ed-4321-95f8-02c3b3f271e3" (UID: "f09789c2-52ed-4321-95f8-02c3b3f271e3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:28:28 crc kubenswrapper[4714]: I0129 16:28:28.559197 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f09789c2-52ed-4321-95f8-02c3b3f271e3-scripts" (OuterVolumeSpecName: "scripts") pod "f09789c2-52ed-4321-95f8-02c3b3f271e3" (UID: "f09789c2-52ed-4321-95f8-02c3b3f271e3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:28:28 crc kubenswrapper[4714]: I0129 16:28:28.562379 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f09789c2-52ed-4321-95f8-02c3b3f271e3-kube-api-access-hcnsl" (OuterVolumeSpecName: "kube-api-access-hcnsl") pod "f09789c2-52ed-4321-95f8-02c3b3f271e3" (UID: "f09789c2-52ed-4321-95f8-02c3b3f271e3"). InnerVolumeSpecName "kube-api-access-hcnsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:28:28 crc kubenswrapper[4714]: I0129 16:28:28.594611 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f09789c2-52ed-4321-95f8-02c3b3f271e3-config-data" (OuterVolumeSpecName: "config-data") pod "f09789c2-52ed-4321-95f8-02c3b3f271e3" (UID: "f09789c2-52ed-4321-95f8-02c3b3f271e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:28:28 crc kubenswrapper[4714]: I0129 16:28:28.654321 4714 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f09789c2-52ed-4321-95f8-02c3b3f271e3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:28 crc kubenswrapper[4714]: I0129 16:28:28.654417 4714 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f09789c2-52ed-4321-95f8-02c3b3f271e3-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:28 crc kubenswrapper[4714]: I0129 16:28:28.654457 4714 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f09789c2-52ed-4321-95f8-02c3b3f271e3-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:28 crc kubenswrapper[4714]: I0129 16:28:28.654481 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcnsl\" (UniqueName: \"kubernetes.io/projected/f09789c2-52ed-4321-95f8-02c3b3f271e3-kube-api-access-hcnsl\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.132335 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-9pvrg" event={"ID":"f09789c2-52ed-4321-95f8-02c3b3f271e3","Type":"ContainerDied","Data":"915808b45f9976849e0c7fd3de3f5846a7cc68f45e22deab7e1b486fb97c6590"} Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.132745 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="915808b45f9976849e0c7fd3de3f5846a7cc68f45e22deab7e1b486fb97c6590" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.132694 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-9pvrg" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.431248 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 29 16:28:29 crc kubenswrapper[4714]: E0129 16:28:29.431690 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09789c2-52ed-4321-95f8-02c3b3f271e3" containerName="cinder-db-sync" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.431707 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09789c2-52ed-4321-95f8-02c3b3f271e3" containerName="cinder-db-sync" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.431835 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="f09789c2-52ed-4321-95f8-02c3b3f271e3" containerName="cinder-db-sync" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.432745 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.438999 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-cinder-dockercfg-xw2vq" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.439406 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-scripts" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.439603 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-scheduler-config-data" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.439822 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-config-data" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.448007 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.449165 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.453774 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-backup-config-data" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.454547 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.462858 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.572792 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.573808 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.577538 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-volume-volume1-config-data" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.578600 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-config-data\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.578641 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-lib-modules\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.578661 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-run\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.578677 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0390b29-ac12-4c76-a954-8c7236d81661-scripts\") pod \"cinder-scheduler-0\" (UID: \"c0390b29-ac12-4c76-a954-8c7236d81661\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.578692 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-scripts\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.578718 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.578746 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.578800 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-sys\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.578820 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rxqx\" (UniqueName: 
\"kubernetes.io/projected/c0390b29-ac12-4c76-a954-8c7236d81661-kube-api-access-6rxqx\") pod \"cinder-scheduler-0\" (UID: \"c0390b29-ac12-4c76-a954-8c7236d81661\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.578845 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.578859 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-dev\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.578875 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0390b29-ac12-4c76-a954-8c7236d81661-config-data\") pod \"cinder-scheduler-0\" (UID: \"c0390b29-ac12-4c76-a954-8c7236d81661\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.578895 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.578908 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-config-data-custom\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.578924 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5dd5\" (UniqueName: \"kubernetes.io/projected/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-kube-api-access-s5dd5\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.578975 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.579031 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0390b29-ac12-4c76-a954-8c7236d81661-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c0390b29-ac12-4c76-a954-8c7236d81661\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.579065 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-etc-nvme\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.579085 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0390b29-ac12-4c76-a954-8c7236d81661-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c0390b29-ac12-4c76-a954-8c7236d81661\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.614664 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.679945 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5dd5\" (UniqueName: \"kubernetes.io/projected/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-kube-api-access-s5dd5\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.679999 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.680036 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.680054 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.680070 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.680186 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0390b29-ac12-4c76-a954-8c7236d81661-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c0390b29-ac12-4c76-a954-8c7236d81661\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.680205 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 
16:28:29.680230 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0390b29-ac12-4c76-a954-8c7236d81661-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c0390b29-ac12-4c76-a954-8c7236d81661\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.680297 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-etc-nvme\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.680366 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0390b29-ac12-4c76-a954-8c7236d81661-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c0390b29-ac12-4c76-a954-8c7236d81661\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.680375 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.680399 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-etc-nvme\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.680408 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.680455 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.680498 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-config-data\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.680520 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-lib-modules\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.680539 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-run\") 
pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.680562 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0390b29-ac12-4c76-a954-8c7236d81661-scripts\") pod \"cinder-scheduler-0\" (UID: \"c0390b29-ac12-4c76-a954-8c7236d81661\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.680581 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-scripts\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.680594 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-lib-modules\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.680604 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e154d80-4b79-4f74-809e-c1c274ed4063-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.680899 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-run\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.680981 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.681046 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.681102 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e154d80-4b79-4f74-809e-c1c274ed4063-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.681140 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.681154 4714 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-sys\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.681178 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rxqx\" (UniqueName: \"kubernetes.io/projected/c0390b29-ac12-4c76-a954-8c7236d81661-kube-api-access-6rxqx\") pod \"cinder-scheduler-0\" (UID: \"c0390b29-ac12-4c76-a954-8c7236d81661\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.681193 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-sys\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.681209 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbcbm\" (UniqueName: \"kubernetes.io/projected/2e154d80-4b79-4f74-809e-c1c274ed4063-kube-api-access-dbcbm\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.681232 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.681258 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.681273 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-dev\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.681289 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-run\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.681321 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e154d80-4b79-4f74-809e-c1c274ed4063-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.681328 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.681342 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0390b29-ac12-4c76-a954-8c7236d81661-config-data\") pod \"cinder-scheduler-0\" (UID: \"c0390b29-ac12-4c76-a954-8c7236d81661\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.681359 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.681376 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-dev\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.681531 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-dev\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.681555 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-sys\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.681595 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.681610 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-config-data-custom\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.682038 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.689619 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0390b29-ac12-4c76-a954-8c7236d81661-scripts\") pod \"cinder-scheduler-0\" (UID: \"c0390b29-ac12-4c76-a954-8c7236d81661\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:28:29 crc kubenswrapper[4714]: 
I0129 16:28:29.690219 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0390b29-ac12-4c76-a954-8c7236d81661-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c0390b29-ac12-4c76-a954-8c7236d81661\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.693554 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-config-data-custom\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.695924 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-config-data\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.700353 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-scripts\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.701873 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0390b29-ac12-4c76-a954-8c7236d81661-config-data\") pod \"cinder-scheduler-0\" (UID: \"c0390b29-ac12-4c76-a954-8c7236d81661\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.711679 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5dd5\" (UniqueName: \"kubernetes.io/projected/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-kube-api-access-s5dd5\") pod \"cinder-backup-0\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.714890 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rxqx\" (UniqueName: \"kubernetes.io/projected/c0390b29-ac12-4c76-a954-8c7236d81661-kube-api-access-6rxqx\") pod \"cinder-scheduler-0\" (UID: \"c0390b29-ac12-4c76-a954-8c7236d81661\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.720399 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.721738 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.726296 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-api-config-data" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.737260 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.757031 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.769251 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.785151 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e154d80-4b79-4f74-809e-c1c274ed4063-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.785197 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbcbm\" (UniqueName: \"kubernetes.io/projected/2e154d80-4b79-4f74-809e-c1c274ed4063-kube-api-access-dbcbm\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.785217 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.785234 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-run\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.785254 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e154d80-4b79-4f74-809e-c1c274ed4063-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.785273 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-dev\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.785286 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-sys\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.785311 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.785334 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.785351 
4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.785368 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.785391 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.785406 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.785436 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e154d80-4b79-4f74-809e-c1c274ed4063-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.785779 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.785826 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.785849 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.785880 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.785969 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-sys\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.786090 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-run\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.788082 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-dev\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.788145 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.788197 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.788227 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.791377 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e154d80-4b79-4f74-809e-c1c274ed4063-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.792155 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e154d80-4b79-4f74-809e-c1c274ed4063-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.793647 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e154d80-4b79-4f74-809e-c1c274ed4063-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.819543 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbcbm\" (UniqueName: \"kubernetes.io/projected/2e154d80-4b79-4f74-809e-c1c274ed4063-kube-api-access-dbcbm\") pod \"cinder-volume-volume1-0\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " 
pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.886661 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-scripts\") pod \"cinder-api-0\" (UID: \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.886960 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-logs\") pod \"cinder-api-0\" (UID: \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.886997 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.887058 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.887286 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-796r4\" (UniqueName: \"kubernetes.io/projected/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-kube-api-access-796r4\") pod \"cinder-api-0\" (UID: \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.887482 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-config-data-custom\") pod \"cinder-api-0\" (UID: \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.887522 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-config-data\") pod \"cinder-api-0\" (UID: \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.988893 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-796r4\" (UniqueName: \"kubernetes.io/projected/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-kube-api-access-796r4\") pod \"cinder-api-0\" (UID: \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.989196 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-config-data-custom\") pod \"cinder-api-0\" (UID: \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.989217 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-config-data\") pod \"cinder-api-0\" (UID: 
\"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.989246 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-scripts\") pod \"cinder-api-0\" (UID: \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.989283 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-logs\") pod \"cinder-api-0\" (UID: \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.989302 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.989358 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.990298 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-logs\") pod \"cinder-api-0\" (UID: \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.992967 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-config-data-custom\") pod \"cinder-api-0\" (UID: \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.993350 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-scripts\") pod \"cinder-api-0\" (UID: \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:28:29 crc kubenswrapper[4714]: I0129 16:28:29.994007 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-config-data\") pod \"cinder-api-0\" (UID: \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:28:30 crc kubenswrapper[4714]: I0129 16:28:30.005255 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-796r4\" (UniqueName: \"kubernetes.io/projected/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-kube-api-access-796r4\") pod \"cinder-api-0\" (UID: \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:28:30 crc kubenswrapper[4714]: I0129 16:28:30.058063 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:28:30 crc kubenswrapper[4714]: I0129 16:28:30.152049 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 29 16:28:30 crc kubenswrapper[4714]: W0129 16:28:30.159985 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e154d80_4b79_4f74_809e_c1c274ed4063.slice/crio-aa9a0a1ce04cf5e776943ebe4b4ffc887cc7f0d7111f44e2e55f143d9edbcb9b WatchSource:0}: Error finding container aa9a0a1ce04cf5e776943ebe4b4ffc887cc7f0d7111f44e2e55f143d9edbcb9b: Status 404 returned error can't find the container with id aa9a0a1ce04cf5e776943ebe4b4ffc887cc7f0d7111f44e2e55f143d9edbcb9b Jan 29 16:28:30 crc kubenswrapper[4714]: W0129 16:28:30.254654 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0390b29_ac12_4c76_a954_8c7236d81661.slice/crio-a7e3257c239df083499879bab78d2e91169e0bd8d92a8c5fac288924f3619908 WatchSource:0}: Error finding container a7e3257c239df083499879bab78d2e91169e0bd8d92a8c5fac288924f3619908: Status 404 returned error can't find the container with id a7e3257c239df083499879bab78d2e91169e0bd8d92a8c5fac288924f3619908 Jan 29 16:28:30 crc kubenswrapper[4714]: I0129 16:28:30.255186 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 29 16:28:30 crc kubenswrapper[4714]: W0129 16:28:30.262403 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode01c83c7_65ba_4f1b_9d17_ba5a824216bb.slice/crio-198a3df4efafc7e3421f9f093f38267c1562e37ed9290df232bcf8b82972d9a2 WatchSource:0}: Error finding container 198a3df4efafc7e3421f9f093f38267c1562e37ed9290df232bcf8b82972d9a2: Status 404 returned error can't find the container with id 198a3df4efafc7e3421f9f093f38267c1562e37ed9290df232bcf8b82972d9a2 Jan 29 16:28:30 crc kubenswrapper[4714]: I0129 16:28:30.262741 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 29 16:28:30 crc kubenswrapper[4714]: I0129 16:28:30.500032 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 29 16:28:30 crc kubenswrapper[4714]: W0129 16:28:30.506857 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b21fd86_5b9b_4b13_82aa_eb3d7f1fafb3.slice/crio-4d4f411f7f92df4f5d547c75bfc715baa406061177023b383b53537927256cc7 WatchSource:0}: Error finding container 4d4f411f7f92df4f5d547c75bfc715baa406061177023b383b53537927256cc7: Status 404 returned error can't find the container with id 4d4f411f7f92df4f5d547c75bfc715baa406061177023b383b53537927256cc7 Jan 29 16:28:31 crc kubenswrapper[4714]: I0129 16:28:31.148398 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"c0390b29-ac12-4c76-a954-8c7236d81661","Type":"ContainerStarted","Data":"a7e3257c239df083499879bab78d2e91169e0bd8d92a8c5fac288924f3619908"} Jan 29 16:28:31 crc kubenswrapper[4714]: I0129 16:28:31.150244 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3","Type":"ContainerStarted","Data":"79461c5a69e4029a68e7e8faa95d70a67ee30b1d8f5d9b8793dd7214e3da2c42"} Jan 29 16:28:31 crc kubenswrapper[4714]: I0129 
16:28:31.150299 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3","Type":"ContainerStarted","Data":"4d4f411f7f92df4f5d547c75bfc715baa406061177023b383b53537927256cc7"} Jan 29 16:28:31 crc kubenswrapper[4714]: I0129 16:28:31.151489 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"2e154d80-4b79-4f74-809e-c1c274ed4063","Type":"ContainerStarted","Data":"aa9a0a1ce04cf5e776943ebe4b4ffc887cc7f0d7111f44e2e55f143d9edbcb9b"} Jan 29 16:28:31 crc kubenswrapper[4714]: I0129 16:28:31.152439 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"e01c83c7-65ba-4f1b-9d17-ba5a824216bb","Type":"ContainerStarted","Data":"198a3df4efafc7e3421f9f093f38267c1562e37ed9290df232bcf8b82972d9a2"} Jan 29 16:28:32 crc kubenswrapper[4714]: I0129 16:28:32.164561 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"c0390b29-ac12-4c76-a954-8c7236d81661","Type":"ContainerStarted","Data":"8b1dd6490cae9482eb69b08b1702b33a571bfe8525bbdf572a2f8f5feb7f4e29"} Jan 29 16:28:32 crc kubenswrapper[4714]: I0129 16:28:32.167708 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3","Type":"ContainerStarted","Data":"e1b4f5a218866bd462e639ef9e2c44453f1907feb94c13e36f29e0c48f866074"} Jan 29 16:28:32 crc kubenswrapper[4714]: I0129 16:28:32.167978 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:28:32 crc kubenswrapper[4714]: I0129 16:28:32.172208 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"2e154d80-4b79-4f74-809e-c1c274ed4063","Type":"ContainerStarted","Data":"fa59fe5c6301744749846298c687355a593a6ea20971c6048c481907a9c337d4"} Jan 29 16:28:32 crc kubenswrapper[4714]: I0129 16:28:32.172277 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"2e154d80-4b79-4f74-809e-c1c274ed4063","Type":"ContainerStarted","Data":"20125722bd9150df390f3154c0604d05230bfe6813ac735f8e571c5a8d6f0b17"} Jan 29 16:28:32 crc kubenswrapper[4714]: I0129 16:28:32.176250 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"e01c83c7-65ba-4f1b-9d17-ba5a824216bb","Type":"ContainerStarted","Data":"e1bfab0c04a18d8e21e126e8983546eaa8cd1254da1312d48182dae10e5944bc"} Jan 29 16:28:32 crc kubenswrapper[4714]: I0129 16:28:32.176298 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"e01c83c7-65ba-4f1b-9d17-ba5a824216bb","Type":"ContainerStarted","Data":"f457144c29e9469b286400011d4bf2e3e4f7a3f73d20a9a7750c293dc9d6911b"} Jan 29 16:28:32 crc kubenswrapper[4714]: I0129 16:28:32.197635 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-api-0" podStartSLOduration=3.197615522 podStartE2EDuration="3.197615522s" podCreationTimestamp="2026-01-29 16:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:28:32.189818478 +0000 UTC m=+1118.710319608" watchObservedRunningTime="2026-01-29 16:28:32.197615522 +0000 UTC m=+1118.718116652" Jan 29 16:28:32 crc kubenswrapper[4714]: I0129 
16:28:32.223210 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-backup-0" podStartSLOduration=2.378593251 podStartE2EDuration="3.223193821s" podCreationTimestamp="2026-01-29 16:28:29 +0000 UTC" firstStartedPulling="2026-01-29 16:28:30.265461075 +0000 UTC m=+1116.785962195" lastFinishedPulling="2026-01-29 16:28:31.110061655 +0000 UTC m=+1117.630562765" observedRunningTime="2026-01-29 16:28:32.21971189 +0000 UTC m=+1118.740213010" watchObservedRunningTime="2026-01-29 16:28:32.223193821 +0000 UTC m=+1118.743694941" Jan 29 16:28:32 crc kubenswrapper[4714]: I0129 16:28:32.257096 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podStartSLOduration=2.23028182 podStartE2EDuration="3.257070297s" podCreationTimestamp="2026-01-29 16:28:29 +0000 UTC" firstStartedPulling="2026-01-29 16:28:30.161873254 +0000 UTC m=+1116.682374374" lastFinishedPulling="2026-01-29 16:28:31.188661731 +0000 UTC m=+1117.709162851" observedRunningTime="2026-01-29 16:28:32.248598966 +0000 UTC m=+1118.769100106" watchObservedRunningTime="2026-01-29 16:28:32.257070297 +0000 UTC m=+1118.777571417" Jan 29 16:28:33 crc kubenswrapper[4714]: I0129 16:28:33.185693 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"c0390b29-ac12-4c76-a954-8c7236d81661","Type":"ContainerStarted","Data":"a3a659f2d53d2c5e83b9c4601491f3c732c7a7d0fe867de849d8aab7c8e4a69c"} Jan 29 16:28:33 crc kubenswrapper[4714]: I0129 16:28:33.204131 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-scheduler-0" podStartSLOduration=3.3571337740000002 podStartE2EDuration="4.204111326s" podCreationTimestamp="2026-01-29 16:28:29 +0000 UTC" firstStartedPulling="2026-01-29 16:28:30.259593951 +0000 UTC m=+1116.780095061" lastFinishedPulling="2026-01-29 16:28:31.106571493 +0000 UTC m=+1117.627072613" observedRunningTime="2026-01-29 16:28:33.202657638 +0000 UTC m=+1119.723158758" watchObservedRunningTime="2026-01-29 16:28:33.204111326 +0000 UTC m=+1119.724612446" Jan 29 16:28:34 crc kubenswrapper[4714]: I0129 16:28:34.196057 4714 generic.go:334] "Generic (PLEG): container finished" podID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerID="fa59fe5c6301744749846298c687355a593a6ea20971c6048c481907a9c337d4" exitCode=1 Jan 29 16:28:34 crc kubenswrapper[4714]: I0129 16:28:34.196142 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"2e154d80-4b79-4f74-809e-c1c274ed4063","Type":"ContainerDied","Data":"fa59fe5c6301744749846298c687355a593a6ea20971c6048c481907a9c337d4"} Jan 29 16:28:34 crc kubenswrapper[4714]: I0129 16:28:34.197245 4714 scope.go:117] "RemoveContainer" containerID="fa59fe5c6301744749846298c687355a593a6ea20971c6048c481907a9c337d4" Jan 29 16:28:34 crc kubenswrapper[4714]: I0129 16:28:34.758250 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:28:34 crc kubenswrapper[4714]: I0129 16:28:34.769694 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:34 crc kubenswrapper[4714]: I0129 16:28:34.887542 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:35 crc kubenswrapper[4714]: I0129 16:28:35.204167 4714 generic.go:334] "Generic (PLEG): 
container finished" podID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerID="20125722bd9150df390f3154c0604d05230bfe6813ac735f8e571c5a8d6f0b17" exitCode=1 Jan 29 16:28:35 crc kubenswrapper[4714]: I0129 16:28:35.205077 4714 scope.go:117] "RemoveContainer" containerID="20125722bd9150df390f3154c0604d05230bfe6813ac735f8e571c5a8d6f0b17" Jan 29 16:28:35 crc kubenswrapper[4714]: I0129 16:28:35.205286 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"2e154d80-4b79-4f74-809e-c1c274ed4063","Type":"ContainerDied","Data":"20125722bd9150df390f3154c0604d05230bfe6813ac735f8e571c5a8d6f0b17"} Jan 29 16:28:35 crc kubenswrapper[4714]: I0129 16:28:35.205316 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"2e154d80-4b79-4f74-809e-c1c274ed4063","Type":"ContainerStarted","Data":"34f2b3f7b5334da41b022ccd7abc5bd0f409e91fd9b5a369d64fab6c7fb44523"} Jan 29 16:28:35 crc kubenswrapper[4714]: I0129 16:28:35.888139 4714 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:36 crc kubenswrapper[4714]: I0129 16:28:36.214741 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"2e154d80-4b79-4f74-809e-c1c274ed4063","Type":"ContainerStarted","Data":"46a4b26eb4dfed202b650ea94571e607a9817d63a8156ea95e8371d72e104ee7"} Jan 29 16:28:37 crc kubenswrapper[4714]: I0129 16:28:37.226095 4714 generic.go:334] "Generic (PLEG): container finished" podID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerID="34f2b3f7b5334da41b022ccd7abc5bd0f409e91fd9b5a369d64fab6c7fb44523" exitCode=1 Jan 29 16:28:37 crc kubenswrapper[4714]: I0129 16:28:37.226170 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"2e154d80-4b79-4f74-809e-c1c274ed4063","Type":"ContainerDied","Data":"34f2b3f7b5334da41b022ccd7abc5bd0f409e91fd9b5a369d64fab6c7fb44523"} Jan 29 16:28:37 crc kubenswrapper[4714]: I0129 16:28:37.226499 4714 scope.go:117] "RemoveContainer" containerID="fa59fe5c6301744749846298c687355a593a6ea20971c6048c481907a9c337d4" Jan 29 16:28:37 crc kubenswrapper[4714]: I0129 16:28:37.227106 4714 scope.go:117] "RemoveContainer" containerID="34f2b3f7b5334da41b022ccd7abc5bd0f409e91fd9b5a369d64fab6c7fb44523" Jan 29 16:28:37 crc kubenswrapper[4714]: E0129 16:28:37.227421 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\"" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" Jan 29 16:28:38 crc kubenswrapper[4714]: I0129 16:28:38.235572 4714 generic.go:334] "Generic (PLEG): container finished" podID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerID="46a4b26eb4dfed202b650ea94571e607a9817d63a8156ea95e8371d72e104ee7" exitCode=1 Jan 29 16:28:38 crc kubenswrapper[4714]: I0129 16:28:38.235615 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"2e154d80-4b79-4f74-809e-c1c274ed4063","Type":"ContainerDied","Data":"46a4b26eb4dfed202b650ea94571e607a9817d63a8156ea95e8371d72e104ee7"} Jan 29 16:28:38 crc kubenswrapper[4714]: I0129 16:28:38.235673 4714 scope.go:117] "RemoveContainer" 
containerID="20125722bd9150df390f3154c0604d05230bfe6813ac735f8e571c5a8d6f0b17" Jan 29 16:28:38 crc kubenswrapper[4714]: I0129 16:28:38.236225 4714 scope.go:117] "RemoveContainer" containerID="46a4b26eb4dfed202b650ea94571e607a9817d63a8156ea95e8371d72e104ee7" Jan 29 16:28:38 crc kubenswrapper[4714]: I0129 16:28:38.236265 4714 scope.go:117] "RemoveContainer" containerID="34f2b3f7b5334da41b022ccd7abc5bd0f409e91fd9b5a369d64fab6c7fb44523" Jan 29 16:28:38 crc kubenswrapper[4714]: E0129 16:28:38.236463 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" Jan 29 16:28:38 crc kubenswrapper[4714]: I0129 16:28:38.887514 4714 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:39 crc kubenswrapper[4714]: I0129 16:28:39.246695 4714 scope.go:117] "RemoveContainer" containerID="46a4b26eb4dfed202b650ea94571e607a9817d63a8156ea95e8371d72e104ee7" Jan 29 16:28:39 crc kubenswrapper[4714]: I0129 16:28:39.247284 4714 scope.go:117] "RemoveContainer" containerID="34f2b3f7b5334da41b022ccd7abc5bd0f409e91fd9b5a369d64fab6c7fb44523" Jan 29 16:28:39 crc kubenswrapper[4714]: E0129 16:28:39.247566 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" Jan 29 16:28:39 crc kubenswrapper[4714]: I0129 16:28:39.888030 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:39 crc kubenswrapper[4714]: I0129 16:28:39.888385 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:39 crc kubenswrapper[4714]: I0129 16:28:39.997319 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:28:40 crc kubenswrapper[4714]: I0129 16:28:40.009390 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:28:40 crc kubenswrapper[4714]: I0129 16:28:40.256047 4714 scope.go:117] "RemoveContainer" containerID="46a4b26eb4dfed202b650ea94571e607a9817d63a8156ea95e8371d72e104ee7" Jan 29 16:28:40 crc kubenswrapper[4714]: I0129 16:28:40.256079 4714 scope.go:117] "RemoveContainer" containerID="34f2b3f7b5334da41b022ccd7abc5bd0f409e91fd9b5a369d64fab6c7fb44523" Jan 29 16:28:40 crc kubenswrapper[4714]: E0129 16:28:40.256392 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" Jan 29 16:28:41 crc kubenswrapper[4714]: I0129 16:28:41.263118 4714 scope.go:117] "RemoveContainer" containerID="46a4b26eb4dfed202b650ea94571e607a9817d63a8156ea95e8371d72e104ee7" Jan 29 16:28:41 crc kubenswrapper[4714]: I0129 16:28:41.263148 4714 scope.go:117] "RemoveContainer" containerID="34f2b3f7b5334da41b022ccd7abc5bd0f409e91fd9b5a369d64fab6c7fb44523" Jan 29 16:28:41 crc kubenswrapper[4714]: E0129 16:28:41.263375 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" Jan 29 16:28:42 crc kubenswrapper[4714]: I0129 16:28:42.152547 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:28:42 crc kubenswrapper[4714]: E0129 16:28:42.196501 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:28:43 crc kubenswrapper[4714]: I0129 16:28:43.901571 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-1"] Jan 29 16:28:43 crc kubenswrapper[4714]: I0129 16:28:43.902882 4714 util.go:30] "No sandbox for pod can be found. 
Jan 29 16:28:43 crc kubenswrapper[4714]: I0129 16:28:43.910603 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-1"]
Jan 29 16:28:44 crc kubenswrapper[4714]: I0129 16:28:44.045510 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8htn\" (UniqueName: \"kubernetes.io/projected/73e3160b-2922-47cf-999d-ad759cae98bc-kube-api-access-q8htn\") pod \"cinder-scheduler-1\" (UID: \"73e3160b-2922-47cf-999d-ad759cae98bc\") " pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 29 16:28:44 crc kubenswrapper[4714]: I0129 16:28:44.045577 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e3160b-2922-47cf-999d-ad759cae98bc-config-data\") pod \"cinder-scheduler-1\" (UID: \"73e3160b-2922-47cf-999d-ad759cae98bc\") " pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 29 16:28:44 crc kubenswrapper[4714]: I0129 16:28:44.045697 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73e3160b-2922-47cf-999d-ad759cae98bc-config-data-custom\") pod \"cinder-scheduler-1\" (UID: \"73e3160b-2922-47cf-999d-ad759cae98bc\") " pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 29 16:28:44 crc kubenswrapper[4714]: I0129 16:28:44.045728 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73e3160b-2922-47cf-999d-ad759cae98bc-etc-machine-id\") pod \"cinder-scheduler-1\" (UID: \"73e3160b-2922-47cf-999d-ad759cae98bc\") " pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 29 16:28:44 crc kubenswrapper[4714]: I0129 16:28:44.045769 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73e3160b-2922-47cf-999d-ad759cae98bc-scripts\") pod \"cinder-scheduler-1\" (UID: \"73e3160b-2922-47cf-999d-ad759cae98bc\") " pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 29 16:28:44 crc kubenswrapper[4714]: I0129 16:28:44.147691 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73e3160b-2922-47cf-999d-ad759cae98bc-config-data-custom\") pod \"cinder-scheduler-1\" (UID: \"73e3160b-2922-47cf-999d-ad759cae98bc\") " pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 29 16:28:44 crc kubenswrapper[4714]: I0129 16:28:44.147742 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73e3160b-2922-47cf-999d-ad759cae98bc-etc-machine-id\") pod \"cinder-scheduler-1\" (UID: \"73e3160b-2922-47cf-999d-ad759cae98bc\") " pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 29 16:28:44 crc kubenswrapper[4714]: I0129 16:28:44.147787 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73e3160b-2922-47cf-999d-ad759cae98bc-scripts\") pod \"cinder-scheduler-1\" (UID: \"73e3160b-2922-47cf-999d-ad759cae98bc\") " pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 29 16:28:44 crc kubenswrapper[4714]: I0129 16:28:44.147845 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8htn\" (UniqueName: \"kubernetes.io/projected/73e3160b-2922-47cf-999d-ad759cae98bc-kube-api-access-q8htn\") pod \"cinder-scheduler-1\" (UID: \"73e3160b-2922-47cf-999d-ad759cae98bc\") " pod="cinder-kuttl-tests/cinder-scheduler-1"
\"kubernetes.io/projected/73e3160b-2922-47cf-999d-ad759cae98bc-kube-api-access-q8htn\") pod \"cinder-scheduler-1\" (UID: \"73e3160b-2922-47cf-999d-ad759cae98bc\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 29 16:28:44 crc kubenswrapper[4714]: I0129 16:28:44.147883 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e3160b-2922-47cf-999d-ad759cae98bc-config-data\") pod \"cinder-scheduler-1\" (UID: \"73e3160b-2922-47cf-999d-ad759cae98bc\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 29 16:28:44 crc kubenswrapper[4714]: I0129 16:28:44.147916 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73e3160b-2922-47cf-999d-ad759cae98bc-etc-machine-id\") pod \"cinder-scheduler-1\" (UID: \"73e3160b-2922-47cf-999d-ad759cae98bc\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 29 16:28:44 crc kubenswrapper[4714]: I0129 16:28:44.158453 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73e3160b-2922-47cf-999d-ad759cae98bc-scripts\") pod \"cinder-scheduler-1\" (UID: \"73e3160b-2922-47cf-999d-ad759cae98bc\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 29 16:28:44 crc kubenswrapper[4714]: I0129 16:28:44.161004 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e3160b-2922-47cf-999d-ad759cae98bc-config-data\") pod \"cinder-scheduler-1\" (UID: \"73e3160b-2922-47cf-999d-ad759cae98bc\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 29 16:28:44 crc kubenswrapper[4714]: I0129 16:28:44.163506 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73e3160b-2922-47cf-999d-ad759cae98bc-config-data-custom\") pod \"cinder-scheduler-1\" (UID: \"73e3160b-2922-47cf-999d-ad759cae98bc\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 29 16:28:44 crc kubenswrapper[4714]: I0129 16:28:44.168153 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8htn\" (UniqueName: \"kubernetes.io/projected/73e3160b-2922-47cf-999d-ad759cae98bc-kube-api-access-q8htn\") pod \"cinder-scheduler-1\" (UID: \"73e3160b-2922-47cf-999d-ad759cae98bc\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 29 16:28:44 crc kubenswrapper[4714]: I0129 16:28:44.255605 4714 util.go:30] "No sandbox for pod can be found. 
Jan 29 16:28:44 crc kubenswrapper[4714]: I0129 16:28:44.678644 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-1"]
Jan 29 16:28:44 crc kubenswrapper[4714]: W0129 16:28:44.678920 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73e3160b_2922_47cf_999d_ad759cae98bc.slice/crio-e286b3bb2b844ed93f167bb8d414988c54c125f6caa42afdb169333e416d9923 WatchSource:0}: Error finding container e286b3bb2b844ed93f167bb8d414988c54c125f6caa42afdb169333e416d9923: Status 404 returned error can't find the container with id e286b3bb2b844ed93f167bb8d414988c54c125f6caa42afdb169333e416d9923
Jan 29 16:28:45 crc kubenswrapper[4714]: I0129 16:28:45.303528 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-1" event={"ID":"73e3160b-2922-47cf-999d-ad759cae98bc","Type":"ContainerStarted","Data":"b78224b7eaf29fa5f35f5a14a7913fb0a21443ce839155a632681f4c3351fc14"}
Jan 29 16:28:45 crc kubenswrapper[4714]: I0129 16:28:45.304953 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-1" event={"ID":"73e3160b-2922-47cf-999d-ad759cae98bc","Type":"ContainerStarted","Data":"e286b3bb2b844ed93f167bb8d414988c54c125f6caa42afdb169333e416d9923"}
Jan 29 16:28:46 crc kubenswrapper[4714]: I0129 16:28:46.325483 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-1" event={"ID":"73e3160b-2922-47cf-999d-ad759cae98bc","Type":"ContainerStarted","Data":"887a921e7b634430b2e58452fdae19c42793bfa2ce2b387e0375f8cbedc76627"}
Jan 29 16:28:49 crc kubenswrapper[4714]: I0129 16:28:49.256725 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 29 16:28:52 crc kubenswrapper[4714]: I0129 16:28:52.184372 4714 scope.go:117] "RemoveContainer" containerID="46a4b26eb4dfed202b650ea94571e607a9817d63a8156ea95e8371d72e104ee7"
Jan 29 16:28:52 crc kubenswrapper[4714]: I0129 16:28:52.184681 4714 scope.go:117] "RemoveContainer" containerID="34f2b3f7b5334da41b022ccd7abc5bd0f409e91fd9b5a369d64fab6c7fb44523"
Jan 29 16:28:53 crc kubenswrapper[4714]: I0129 16:28:53.394561 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"2e154d80-4b79-4f74-809e-c1c274ed4063","Type":"ContainerStarted","Data":"cdf72985bc60f6ce06d587a8a7eb4e0fd4beed9399bf382db190b6ec4763fc7c"}
Jan 29 16:28:53 crc kubenswrapper[4714]: I0129 16:28:53.395070 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"2e154d80-4b79-4f74-809e-c1c274ed4063","Type":"ContainerStarted","Data":"f087542636afa3fd11613ddd38d7bd61a4a8cac1161d258a9bca9a6772f2da3c"}
Jan 29 16:28:53 crc kubenswrapper[4714]: I0129 16:28:53.427635 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-scheduler-1" podStartSLOduration=10.427620308 podStartE2EDuration="10.427620308s" podCreationTimestamp="2026-01-29 16:28:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:28:46.35044233 +0000 UTC m=+1132.870943450" watchObservedRunningTime="2026-01-29 16:28:53.427620308 +0000 UTC m=+1139.948121428"
Jan 29 16:28:54 crc kubenswrapper[4714]: E0129 16:28:54.330850 4714 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 29 16:28:54 crc kubenswrapper[4714]: E0129 16:28:54.331595 4714 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmm88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wdwq5_openshift-marketplace(8c12ad14-f878-42a1-a168-bad4026ec2dd): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:28:54 crc kubenswrapper[4714]: E0129 16:28:54.333021 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd"
Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.406624 4714 generic.go:334] "Generic (PLEG): container finished" podID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerID="cdf72985bc60f6ce06d587a8a7eb4e0fd4beed9399bf382db190b6ec4763fc7c" exitCode=1
Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.406665 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"2e154d80-4b79-4f74-809e-c1c274ed4063","Type":"ContainerDied","Data":"cdf72985bc60f6ce06d587a8a7eb4e0fd4beed9399bf382db190b6ec4763fc7c"}
Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.406699 4714 scope.go:117] "RemoveContainer" containerID="34f2b3f7b5334da41b022ccd7abc5bd0f409e91fd9b5a369d64fab6c7fb44523"
Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.407385 4714 scope.go:117] "RemoveContainer" containerID="cdf72985bc60f6ce06d587a8a7eb4e0fd4beed9399bf382db190b6ec4763fc7c"
scope.go:117] "RemoveContainer" containerID="cdf72985bc60f6ce06d587a8a7eb4e0fd4beed9399bf382db190b6ec4763fc7c" Jan 29 16:28:54 crc kubenswrapper[4714]: E0129 16:28:54.407723 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 20s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\"" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.444594 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.496351 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-2"] Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.497460 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.514705 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-2"] Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.620674 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67fa6a5a-7bfb-4079-8658-aca62c22bc73-scripts\") pod \"cinder-scheduler-2\" (UID: \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.620718 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67fa6a5a-7bfb-4079-8658-aca62c22bc73-config-data-custom\") pod \"cinder-scheduler-2\" (UID: \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.620738 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mf9c\" (UniqueName: \"kubernetes.io/projected/67fa6a5a-7bfb-4079-8658-aca62c22bc73-kube-api-access-2mf9c\") pod \"cinder-scheduler-2\" (UID: \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.620762 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67fa6a5a-7bfb-4079-8658-aca62c22bc73-config-data\") pod \"cinder-scheduler-2\" (UID: \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.620800 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67fa6a5a-7bfb-4079-8658-aca62c22bc73-etc-machine-id\") pod \"cinder-scheduler-2\" (UID: \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.723068 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67fa6a5a-7bfb-4079-8658-aca62c22bc73-scripts\") pod \"cinder-scheduler-2\" (UID: \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\") " 
pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.723142 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67fa6a5a-7bfb-4079-8658-aca62c22bc73-config-data-custom\") pod \"cinder-scheduler-2\" (UID: \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.723175 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mf9c\" (UniqueName: \"kubernetes.io/projected/67fa6a5a-7bfb-4079-8658-aca62c22bc73-kube-api-access-2mf9c\") pod \"cinder-scheduler-2\" (UID: \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.723220 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67fa6a5a-7bfb-4079-8658-aca62c22bc73-config-data\") pod \"cinder-scheduler-2\" (UID: \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.723258 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67fa6a5a-7bfb-4079-8658-aca62c22bc73-etc-machine-id\") pod \"cinder-scheduler-2\" (UID: \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.723416 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67fa6a5a-7bfb-4079-8658-aca62c22bc73-etc-machine-id\") pod \"cinder-scheduler-2\" (UID: \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.731364 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67fa6a5a-7bfb-4079-8658-aca62c22bc73-config-data-custom\") pod \"cinder-scheduler-2\" (UID: \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.733288 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67fa6a5a-7bfb-4079-8658-aca62c22bc73-config-data\") pod \"cinder-scheduler-2\" (UID: \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.735270 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67fa6a5a-7bfb-4079-8658-aca62c22bc73-scripts\") pod \"cinder-scheduler-2\" (UID: \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.749858 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mf9c\" (UniqueName: \"kubernetes.io/projected/67fa6a5a-7bfb-4079-8658-aca62c22bc73-kube-api-access-2mf9c\") pod \"cinder-scheduler-2\" (UID: \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.815686 4714 util.go:30] "No sandbox for pod can be found. 
Jan 29 16:28:54 crc kubenswrapper[4714]: I0129 16:28:54.887857 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 29 16:28:55 crc kubenswrapper[4714]: I0129 16:28:55.275083 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-2"]
Jan 29 16:28:55 crc kubenswrapper[4714]: I0129 16:28:55.421671 4714 scope.go:117] "RemoveContainer" containerID="cdf72985bc60f6ce06d587a8a7eb4e0fd4beed9399bf382db190b6ec4763fc7c"
Jan 29 16:28:55 crc kubenswrapper[4714]: E0129 16:28:55.422090 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 20s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\"" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063"
Jan 29 16:28:55 crc kubenswrapper[4714]: I0129 16:28:55.427113 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-2" event={"ID":"67fa6a5a-7bfb-4079-8658-aca62c22bc73","Type":"ContainerStarted","Data":"41540f6e31a993ff1d30e0d04c9d92278e10fe8c05007eecaef75b03ade05470"}
Jan 29 16:28:56 crc kubenswrapper[4714]: I0129 16:28:56.439553 4714 generic.go:334] "Generic (PLEG): container finished" podID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerID="f087542636afa3fd11613ddd38d7bd61a4a8cac1161d258a9bca9a6772f2da3c" exitCode=1
Jan 29 16:28:56 crc kubenswrapper[4714]: I0129 16:28:56.439676 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"2e154d80-4b79-4f74-809e-c1c274ed4063","Type":"ContainerDied","Data":"f087542636afa3fd11613ddd38d7bd61a4a8cac1161d258a9bca9a6772f2da3c"}
Jan 29 16:28:56 crc kubenswrapper[4714]: I0129 16:28:56.439734 4714 scope.go:117] "RemoveContainer" containerID="46a4b26eb4dfed202b650ea94571e607a9817d63a8156ea95e8371d72e104ee7"
Jan 29 16:28:56 crc kubenswrapper[4714]: I0129 16:28:56.441995 4714 scope.go:117] "RemoveContainer" containerID="f087542636afa3fd11613ddd38d7bd61a4a8cac1161d258a9bca9a6772f2da3c"
Jan 29 16:28:56 crc kubenswrapper[4714]: I0129 16:28:56.442427 4714 scope.go:117] "RemoveContainer" containerID="cdf72985bc60f6ce06d587a8a7eb4e0fd4beed9399bf382db190b6ec4763fc7c"
Jan 29 16:28:56 crc kubenswrapper[4714]: E0129 16:28:56.443194 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 20s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063"
Jan 29 16:28:56 crc kubenswrapper[4714]: I0129 16:28:56.455009 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-2" event={"ID":"67fa6a5a-7bfb-4079-8658-aca62c22bc73","Type":"ContainerStarted","Data":"a901ff5206c34331366bd833f8b4176744c61709983ca47575647f7a5aef1760"}
Jan 29 16:28:56 crc kubenswrapper[4714]: I0129 16:28:56.455063 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-2" event={"ID":"67fa6a5a-7bfb-4079-8658-aca62c22bc73","Type":"ContainerStarted","Data":"a781996ba3c27232811a12f89e03f81db23d935d200f9ce52272f5303e895f04"}
pod="cinder-kuttl-tests/cinder-scheduler-2" event={"ID":"67fa6a5a-7bfb-4079-8658-aca62c22bc73","Type":"ContainerStarted","Data":"a781996ba3c27232811a12f89e03f81db23d935d200f9ce52272f5303e895f04"} Jan 29 16:28:56 crc kubenswrapper[4714]: I0129 16:28:56.510128 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-scheduler-2" podStartSLOduration=2.510108933 podStartE2EDuration="2.510108933s" podCreationTimestamp="2026-01-29 16:28:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:28:56.504422725 +0000 UTC m=+1143.024923855" watchObservedRunningTime="2026-01-29 16:28:56.510108933 +0000 UTC m=+1143.030610043" Jan 29 16:28:56 crc kubenswrapper[4714]: I0129 16:28:56.887347 4714 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:57 crc kubenswrapper[4714]: I0129 16:28:57.478588 4714 scope.go:117] "RemoveContainer" containerID="f087542636afa3fd11613ddd38d7bd61a4a8cac1161d258a9bca9a6772f2da3c" Jan 29 16:28:57 crc kubenswrapper[4714]: I0129 16:28:57.478638 4714 scope.go:117] "RemoveContainer" containerID="cdf72985bc60f6ce06d587a8a7eb4e0fd4beed9399bf382db190b6ec4763fc7c" Jan 29 16:28:57 crc kubenswrapper[4714]: E0129 16:28:57.479263 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 20s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" Jan 29 16:28:59 crc kubenswrapper[4714]: I0129 16:28:59.816128 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 29 16:28:59 crc kubenswrapper[4714]: I0129 16:28:59.887913 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:28:59 crc kubenswrapper[4714]: I0129 16:28:59.888640 4714 scope.go:117] "RemoveContainer" containerID="f087542636afa3fd11613ddd38d7bd61a4a8cac1161d258a9bca9a6772f2da3c" Jan 29 16:28:59 crc kubenswrapper[4714]: I0129 16:28:59.888657 4714 scope.go:117] "RemoveContainer" containerID="cdf72985bc60f6ce06d587a8a7eb4e0fd4beed9399bf382db190b6ec4763fc7c" Jan 29 16:28:59 crc kubenswrapper[4714]: E0129 16:28:59.888856 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 20s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" Jan 29 16:29:05 crc kubenswrapper[4714]: I0129 16:29:05.060071 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 29 16:29:05 crc kubenswrapper[4714]: I0129 
Jan 29 16:29:05 crc kubenswrapper[4714]: I0129 16:29:05.595499 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-2" podUID="67fa6a5a-7bfb-4079-8658-aca62c22bc73" containerName="cinder-scheduler" containerID="cri-o://a781996ba3c27232811a12f89e03f81db23d935d200f9ce52272f5303e895f04" gracePeriod=30
Jan 29 16:29:05 crc kubenswrapper[4714]: I0129 16:29:05.595735 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-2" podUID="67fa6a5a-7bfb-4079-8658-aca62c22bc73" containerName="probe" containerID="cri-o://a901ff5206c34331366bd833f8b4176744c61709983ca47575647f7a5aef1760" gracePeriod=30
Jan 29 16:29:06 crc kubenswrapper[4714]: E0129 16:29:06.187039 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd"
Jan 29 16:29:06 crc kubenswrapper[4714]: I0129 16:29:06.540646 4714 generic.go:334] "Generic (PLEG): container finished" podID="67fa6a5a-7bfb-4079-8658-aca62c22bc73" containerID="a901ff5206c34331366bd833f8b4176744c61709983ca47575647f7a5aef1760" exitCode=0
Jan 29 16:29:06 crc kubenswrapper[4714]: I0129 16:29:06.540713 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-2" event={"ID":"67fa6a5a-7bfb-4079-8658-aca62c22bc73","Type":"ContainerDied","Data":"a901ff5206c34331366bd833f8b4176744c61709983ca47575647f7a5aef1760"}
Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.028735 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-2"
Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.129504 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67fa6a5a-7bfb-4079-8658-aca62c22bc73-config-data-custom\") pod \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\" (UID: \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\") "
Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.129629 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67fa6a5a-7bfb-4079-8658-aca62c22bc73-config-data\") pod \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\" (UID: \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\") "
Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.129734 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mf9c\" (UniqueName: \"kubernetes.io/projected/67fa6a5a-7bfb-4079-8658-aca62c22bc73-kube-api-access-2mf9c\") pod \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\" (UID: \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\") "
Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.129772 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67fa6a5a-7bfb-4079-8658-aca62c22bc73-etc-machine-id\") pod \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\" (UID: \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\") "
Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.129804 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67fa6a5a-7bfb-4079-8658-aca62c22bc73-scripts\") pod \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\" (UID: \"67fa6a5a-7bfb-4079-8658-aca62c22bc73\") "
Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.129966 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67fa6a5a-7bfb-4079-8658-aca62c22bc73-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "67fa6a5a-7bfb-4079-8658-aca62c22bc73" (UID: "67fa6a5a-7bfb-4079-8658-aca62c22bc73"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.130583 4714 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67fa6a5a-7bfb-4079-8658-aca62c22bc73-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.136151 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67fa6a5a-7bfb-4079-8658-aca62c22bc73-scripts" (OuterVolumeSpecName: "scripts") pod "67fa6a5a-7bfb-4079-8658-aca62c22bc73" (UID: "67fa6a5a-7bfb-4079-8658-aca62c22bc73"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.136433 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67fa6a5a-7bfb-4079-8658-aca62c22bc73-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "67fa6a5a-7bfb-4079-8658-aca62c22bc73" (UID: "67fa6a5a-7bfb-4079-8658-aca62c22bc73"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.147303 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67fa6a5a-7bfb-4079-8658-aca62c22bc73-kube-api-access-2mf9c" (OuterVolumeSpecName: "kube-api-access-2mf9c") pod "67fa6a5a-7bfb-4079-8658-aca62c22bc73" (UID: "67fa6a5a-7bfb-4079-8658-aca62c22bc73"). InnerVolumeSpecName "kube-api-access-2mf9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.229006 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67fa6a5a-7bfb-4079-8658-aca62c22bc73-config-data" (OuterVolumeSpecName: "config-data") pod "67fa6a5a-7bfb-4079-8658-aca62c22bc73" (UID: "67fa6a5a-7bfb-4079-8658-aca62c22bc73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.231191 4714 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67fa6a5a-7bfb-4079-8658-aca62c22bc73-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.231221 4714 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67fa6a5a-7bfb-4079-8658-aca62c22bc73-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.231233 4714 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67fa6a5a-7bfb-4079-8658-aca62c22bc73-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.231244 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mf9c\" (UniqueName: \"kubernetes.io/projected/67fa6a5a-7bfb-4079-8658-aca62c22bc73-kube-api-access-2mf9c\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.551298 4714 generic.go:334] "Generic (PLEG): container finished" podID="67fa6a5a-7bfb-4079-8658-aca62c22bc73" containerID="a781996ba3c27232811a12f89e03f81db23d935d200f9ce52272f5303e895f04" exitCode=0 Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.551348 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-2" event={"ID":"67fa6a5a-7bfb-4079-8658-aca62c22bc73","Type":"ContainerDied","Data":"a781996ba3c27232811a12f89e03f81db23d935d200f9ce52272f5303e895f04"} Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.551358 4714 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.551379 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-2" event={"ID":"67fa6a5a-7bfb-4079-8658-aca62c22bc73","Type":"ContainerDied","Data":"41540f6e31a993ff1d30e0d04c9d92278e10fe8c05007eecaef75b03ade05470"}
Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.551403 4714 scope.go:117] "RemoveContainer" containerID="a901ff5206c34331366bd833f8b4176744c61709983ca47575647f7a5aef1760"
Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.578917 4714 scope.go:117] "RemoveContainer" containerID="a781996ba3c27232811a12f89e03f81db23d935d200f9ce52272f5303e895f04"
Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.596153 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-2"]
Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.612198 4714 scope.go:117] "RemoveContainer" containerID="a901ff5206c34331366bd833f8b4176744c61709983ca47575647f7a5aef1760"
Jan 29 16:29:07 crc kubenswrapper[4714]: E0129 16:29:07.613422 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a901ff5206c34331366bd833f8b4176744c61709983ca47575647f7a5aef1760\": container with ID starting with a901ff5206c34331366bd833f8b4176744c61709983ca47575647f7a5aef1760 not found: ID does not exist" containerID="a901ff5206c34331366bd833f8b4176744c61709983ca47575647f7a5aef1760"
Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.613510 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a901ff5206c34331366bd833f8b4176744c61709983ca47575647f7a5aef1760"} err="failed to get container status \"a901ff5206c34331366bd833f8b4176744c61709983ca47575647f7a5aef1760\": rpc error: code = NotFound desc = could not find container \"a901ff5206c34331366bd833f8b4176744c61709983ca47575647f7a5aef1760\": container with ID starting with a901ff5206c34331366bd833f8b4176744c61709983ca47575647f7a5aef1760 not found: ID does not exist"
Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.613543 4714 scope.go:117] "RemoveContainer" containerID="a781996ba3c27232811a12f89e03f81db23d935d200f9ce52272f5303e895f04"
Jan 29 16:29:07 crc kubenswrapper[4714]: E0129 16:29:07.613888 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a781996ba3c27232811a12f89e03f81db23d935d200f9ce52272f5303e895f04\": container with ID starting with a781996ba3c27232811a12f89e03f81db23d935d200f9ce52272f5303e895f04 not found: ID does not exist" containerID="a781996ba3c27232811a12f89e03f81db23d935d200f9ce52272f5303e895f04"
Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.613924 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a781996ba3c27232811a12f89e03f81db23d935d200f9ce52272f5303e895f04"} err="failed to get container status \"a781996ba3c27232811a12f89e03f81db23d935d200f9ce52272f5303e895f04\": rpc error: code = NotFound desc = could not find container \"a781996ba3c27232811a12f89e03f81db23d935d200f9ce52272f5303e895f04\": container with ID starting with a781996ba3c27232811a12f89e03f81db23d935d200f9ce52272f5303e895f04 not found: ID does not exist"
Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.626308 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-2"]
Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.652545 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-1"]
Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.652768 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-1" podUID="73e3160b-2922-47cf-999d-ad759cae98bc" containerName="cinder-scheduler" containerID="cri-o://b78224b7eaf29fa5f35f5a14a7913fb0a21443ce839155a632681f4c3351fc14" gracePeriod=30
Jan 29 16:29:07 crc kubenswrapper[4714]: I0129 16:29:07.653119 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-1" podUID="73e3160b-2922-47cf-999d-ad759cae98bc" containerName="probe" containerID="cri-o://887a921e7b634430b2e58452fdae19c42793bfa2ce2b387e0375f8cbedc76627" gracePeriod=30
Jan 29 16:29:08 crc kubenswrapper[4714]: I0129 16:29:08.199903 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67fa6a5a-7bfb-4079-8658-aca62c22bc73" path="/var/lib/kubelet/pods/67fa6a5a-7bfb-4079-8658-aca62c22bc73/volumes"
Jan 29 16:29:08 crc kubenswrapper[4714]: I0129 16:29:08.561011 4714 generic.go:334] "Generic (PLEG): container finished" podID="73e3160b-2922-47cf-999d-ad759cae98bc" containerID="887a921e7b634430b2e58452fdae19c42793bfa2ce2b387e0375f8cbedc76627" exitCode=0
Jan 29 16:29:08 crc kubenswrapper[4714]: I0129 16:29:08.561101 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-1" event={"ID":"73e3160b-2922-47cf-999d-ad759cae98bc","Type":"ContainerDied","Data":"887a921e7b634430b2e58452fdae19c42793bfa2ce2b387e0375f8cbedc76627"}
Jan 29 16:29:11 crc kubenswrapper[4714]: I0129 16:29:11.184362 4714 scope.go:117] "RemoveContainer" containerID="f087542636afa3fd11613ddd38d7bd61a4a8cac1161d258a9bca9a6772f2da3c"
Jan 29 16:29:11 crc kubenswrapper[4714]: I0129 16:29:11.184668 4714 scope.go:117] "RemoveContainer" containerID="cdf72985bc60f6ce06d587a8a7eb4e0fd4beed9399bf382db190b6ec4763fc7c"
Jan 29 16:29:11 crc kubenswrapper[4714]: E0129 16:29:11.185070 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 20s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063"
Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.103316 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-1"
Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.210827 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73e3160b-2922-47cf-999d-ad759cae98bc-scripts\") pod \"73e3160b-2922-47cf-999d-ad759cae98bc\" (UID: \"73e3160b-2922-47cf-999d-ad759cae98bc\") "
Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.210927 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73e3160b-2922-47cf-999d-ad759cae98bc-etc-machine-id\") pod \"73e3160b-2922-47cf-999d-ad759cae98bc\" (UID: \"73e3160b-2922-47cf-999d-ad759cae98bc\") "
Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.211099 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8htn\" (UniqueName: \"kubernetes.io/projected/73e3160b-2922-47cf-999d-ad759cae98bc-kube-api-access-q8htn\") pod \"73e3160b-2922-47cf-999d-ad759cae98bc\" (UID: \"73e3160b-2922-47cf-999d-ad759cae98bc\") "
Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.211139 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e3160b-2922-47cf-999d-ad759cae98bc-config-data\") pod \"73e3160b-2922-47cf-999d-ad759cae98bc\" (UID: \"73e3160b-2922-47cf-999d-ad759cae98bc\") "
Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.211162 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73e3160b-2922-47cf-999d-ad759cae98bc-config-data-custom\") pod \"73e3160b-2922-47cf-999d-ad759cae98bc\" (UID: \"73e3160b-2922-47cf-999d-ad759cae98bc\") "
Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.211159 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73e3160b-2922-47cf-999d-ad759cae98bc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "73e3160b-2922-47cf-999d-ad759cae98bc" (UID: "73e3160b-2922-47cf-999d-ad759cae98bc"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.211544 4714 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73e3160b-2922-47cf-999d-ad759cae98bc-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.217113 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73e3160b-2922-47cf-999d-ad759cae98bc-scripts" (OuterVolumeSpecName: "scripts") pod "73e3160b-2922-47cf-999d-ad759cae98bc" (UID: "73e3160b-2922-47cf-999d-ad759cae98bc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.219275 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73e3160b-2922-47cf-999d-ad759cae98bc-kube-api-access-q8htn" (OuterVolumeSpecName: "kube-api-access-q8htn") pod "73e3160b-2922-47cf-999d-ad759cae98bc" (UID: "73e3160b-2922-47cf-999d-ad759cae98bc"). InnerVolumeSpecName "kube-api-access-q8htn". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.223148 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73e3160b-2922-47cf-999d-ad759cae98bc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "73e3160b-2922-47cf-999d-ad759cae98bc" (UID: "73e3160b-2922-47cf-999d-ad759cae98bc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.312639 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8htn\" (UniqueName: \"kubernetes.io/projected/73e3160b-2922-47cf-999d-ad759cae98bc-kube-api-access-q8htn\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.312666 4714 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73e3160b-2922-47cf-999d-ad759cae98bc-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.312676 4714 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73e3160b-2922-47cf-999d-ad759cae98bc-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.313591 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73e3160b-2922-47cf-999d-ad759cae98bc-config-data" (OuterVolumeSpecName: "config-data") pod "73e3160b-2922-47cf-999d-ad759cae98bc" (UID: "73e3160b-2922-47cf-999d-ad759cae98bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.414408 4714 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e3160b-2922-47cf-999d-ad759cae98bc-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.596715 4714 generic.go:334] "Generic (PLEG): container finished" podID="73e3160b-2922-47cf-999d-ad759cae98bc" containerID="b78224b7eaf29fa5f35f5a14a7913fb0a21443ce839155a632681f4c3351fc14" exitCode=0 Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.596765 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-1" event={"ID":"73e3160b-2922-47cf-999d-ad759cae98bc","Type":"ContainerDied","Data":"b78224b7eaf29fa5f35f5a14a7913fb0a21443ce839155a632681f4c3351fc14"} Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.596797 4714 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.596799 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-1" event={"ID":"73e3160b-2922-47cf-999d-ad759cae98bc","Type":"ContainerDied","Data":"e286b3bb2b844ed93f167bb8d414988c54c125f6caa42afdb169333e416d9923"}
Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.596819 4714 scope.go:117] "RemoveContainer" containerID="887a921e7b634430b2e58452fdae19c42793bfa2ce2b387e0375f8cbedc76627"
Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.623316 4714 scope.go:117] "RemoveContainer" containerID="b78224b7eaf29fa5f35f5a14a7913fb0a21443ce839155a632681f4c3351fc14"
Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.643057 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-1"]
Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.647449 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-1"]
Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.654806 4714 scope.go:117] "RemoveContainer" containerID="887a921e7b634430b2e58452fdae19c42793bfa2ce2b387e0375f8cbedc76627"
Jan 29 16:29:12 crc kubenswrapper[4714]: E0129 16:29:12.655334 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"887a921e7b634430b2e58452fdae19c42793bfa2ce2b387e0375f8cbedc76627\": container with ID starting with 887a921e7b634430b2e58452fdae19c42793bfa2ce2b387e0375f8cbedc76627 not found: ID does not exist" containerID="887a921e7b634430b2e58452fdae19c42793bfa2ce2b387e0375f8cbedc76627"
Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.655402 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"887a921e7b634430b2e58452fdae19c42793bfa2ce2b387e0375f8cbedc76627"} err="failed to get container status \"887a921e7b634430b2e58452fdae19c42793bfa2ce2b387e0375f8cbedc76627\": rpc error: code = NotFound desc = could not find container \"887a921e7b634430b2e58452fdae19c42793bfa2ce2b387e0375f8cbedc76627\": container with ID starting with 887a921e7b634430b2e58452fdae19c42793bfa2ce2b387e0375f8cbedc76627 not found: ID does not exist"
Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.655447 4714 scope.go:117] "RemoveContainer" containerID="b78224b7eaf29fa5f35f5a14a7913fb0a21443ce839155a632681f4c3351fc14"
Jan 29 16:29:12 crc kubenswrapper[4714]: E0129 16:29:12.659448 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b78224b7eaf29fa5f35f5a14a7913fb0a21443ce839155a632681f4c3351fc14\": container with ID starting with b78224b7eaf29fa5f35f5a14a7913fb0a21443ce839155a632681f4c3351fc14 not found: ID does not exist" containerID="b78224b7eaf29fa5f35f5a14a7913fb0a21443ce839155a632681f4c3351fc14"
Jan 29 16:29:12 crc kubenswrapper[4714]: I0129 16:29:12.659506 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b78224b7eaf29fa5f35f5a14a7913fb0a21443ce839155a632681f4c3351fc14"} err="failed to get container status \"b78224b7eaf29fa5f35f5a14a7913fb0a21443ce839155a632681f4c3351fc14\": rpc error: code = NotFound desc = could not find container \"b78224b7eaf29fa5f35f5a14a7913fb0a21443ce839155a632681f4c3351fc14\": container with ID starting with b78224b7eaf29fa5f35f5a14a7913fb0a21443ce839155a632681f4c3351fc14 not found: ID does not exist"
Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.897078 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"]
crc kubenswrapper[4714]: I0129 16:29:13.897078 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"] Jan 29 16:29:13 crc kubenswrapper[4714]: E0129 16:29:13.897626 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73e3160b-2922-47cf-999d-ad759cae98bc" containerName="cinder-scheduler" Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.897641 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="73e3160b-2922-47cf-999d-ad759cae98bc" containerName="cinder-scheduler" Jan 29 16:29:13 crc kubenswrapper[4714]: E0129 16:29:13.897655 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67fa6a5a-7bfb-4079-8658-aca62c22bc73" containerName="cinder-scheduler" Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.897663 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="67fa6a5a-7bfb-4079-8658-aca62c22bc73" containerName="cinder-scheduler" Jan 29 16:29:13 crc kubenswrapper[4714]: E0129 16:29:13.897686 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67fa6a5a-7bfb-4079-8658-aca62c22bc73" containerName="probe" Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.897693 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="67fa6a5a-7bfb-4079-8658-aca62c22bc73" containerName="probe" Jan 29 16:29:13 crc kubenswrapper[4714]: E0129 16:29:13.897710 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73e3160b-2922-47cf-999d-ad759cae98bc" containerName="probe" Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.897718 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="73e3160b-2922-47cf-999d-ad759cae98bc" containerName="probe" Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.897863 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="73e3160b-2922-47cf-999d-ad759cae98bc" containerName="probe" Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.897878 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="67fa6a5a-7bfb-4079-8658-aca62c22bc73" containerName="probe" Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.897892 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="73e3160b-2922-47cf-999d-ad759cae98bc" containerName="cinder-scheduler" Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.897902 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="67fa6a5a-7bfb-4079-8658-aca62c22bc73" containerName="cinder-scheduler" Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.898742 4714 util.go:30] "No sandbox for pod can be found. 
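
Before admitting cinder-backup-1, the CPU and memory managers purge per-container assignments still recorded for the two deleted cinder-scheduler pods ("RemoveStaleState: removing container", "Deleted CPUSet assignment"). A toy version of that bookkeeping, keyed by (podUID, containerName); the types and function below are illustrative, not the kubelet's:

```go
package main

import "fmt"

// assignments mimics the in-memory state kept per pod and container.
type assignments map[string]map[string]string // podUID -> containerName -> cpuset

// removeStaleState drops entries for pods the API server no longer knows
// about, mirroring the "RemoveStaleState removing state" lines above.
func removeStaleState(state assignments, livePods map[string]bool) {
	for podUID, containers := range state {
		if livePods[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("Deleted CPUSet assignment podUID=%s container=%s\n", podUID, name)
		}
		delete(state, podUID)
	}
}

func main() {
	state := assignments{
		"73e3160b-2922-47cf-999d-ad759cae98bc": {"cinder-scheduler": "0-1", "probe": "2"},
	}
	removeStaleState(state, map[string]bool{}) // no live pods: everything is stale
}
```
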
Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.907973 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"] Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.939442 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-etc-iscsi\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.939485 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-dev\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.939502 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-etc-machine-id\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.939519 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1df413aa-ba11-47aa-9b37-989956046c9f-config-data\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.939558 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-var-locks-cinder\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.939577 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-sys\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.939600 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1df413aa-ba11-47aa-9b37-989956046c9f-scripts\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.939662 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-var-lib-cinder\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.939687 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-etc-nvme\") pod 
\"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.939706 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-var-locks-brick\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.939723 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmcjf\" (UniqueName: \"kubernetes.io/projected/1df413aa-ba11-47aa-9b37-989956046c9f-kube-api-access-kmcjf\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.939744 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-lib-modules\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.939773 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1df413aa-ba11-47aa-9b37-989956046c9f-config-data-custom\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:13 crc kubenswrapper[4714]: I0129 16:29:13.939802 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-run\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.040654 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-var-locks-cinder\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.040712 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-sys\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.040756 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1df413aa-ba11-47aa-9b37-989956046c9f-scripts\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.040807 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-var-lib-cinder\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: 
I0129 16:29:14.040831 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-etc-nvme\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.040863 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-var-locks-brick\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.040909 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-var-locks-cinder\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.040992 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-etc-nvme\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.040990 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-sys\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.041044 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-var-locks-brick\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.041063 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmcjf\" (UniqueName: \"kubernetes.io/projected/1df413aa-ba11-47aa-9b37-989956046c9f-kube-api-access-kmcjf\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.041050 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-var-lib-cinder\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.041151 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-lib-modules\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.041201 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1df413aa-ba11-47aa-9b37-989956046c9f-config-data-custom\") pod \"cinder-backup-1\" (UID: 
\"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.041253 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-run\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.041327 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-lib-modules\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.041454 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-run\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.041870 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-etc-iscsi\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.041920 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-etc-iscsi\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.041963 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-dev\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.042007 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-etc-machine-id\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.042047 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-dev\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.042055 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1df413aa-ba11-47aa-9b37-989956046c9f-config-data\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.042076 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-etc-machine-id\") pod \"cinder-backup-1\" 
(UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.047063 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1df413aa-ba11-47aa-9b37-989956046c9f-scripts\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.048350 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1df413aa-ba11-47aa-9b37-989956046c9f-config-data-custom\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.048739 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1df413aa-ba11-47aa-9b37-989956046c9f-config-data\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.064388 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmcjf\" (UniqueName: \"kubernetes.io/projected/1df413aa-ba11-47aa-9b37-989956046c9f-kube-api-access-kmcjf\") pod \"cinder-backup-1\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.199063 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73e3160b-2922-47cf-999d-ad759cae98bc" path="/var/lib/kubelet/pods/73e3160b-2922-47cf-999d-ad759cae98bc/volumes" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.213680 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:14 crc kubenswrapper[4714]: I0129 16:29:14.660047 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"] Jan 29 16:29:15 crc kubenswrapper[4714]: I0129 16:29:15.630530 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-1" event={"ID":"1df413aa-ba11-47aa-9b37-989956046c9f","Type":"ContainerStarted","Data":"216659e8b1aeed2e1c0a3e238db1d9a8366df5059f31bffd8d1c1116d3eda51f"} Jan 29 16:29:15 crc kubenswrapper[4714]: I0129 16:29:15.631395 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-1" event={"ID":"1df413aa-ba11-47aa-9b37-989956046c9f","Type":"ContainerStarted","Data":"11899e2f2c2124b1f13dd44e2372c23241e6ddb29215a4c11ae4fe8ed891c9bb"} Jan 29 16:29:15 crc kubenswrapper[4714]: I0129 16:29:15.631434 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-1" event={"ID":"1df413aa-ba11-47aa-9b37-989956046c9f","Type":"ContainerStarted","Data":"267d12a21d0575c908ad4f4d9479c0c501da5cdcd406eba9e25bec597bbd362d"} Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.214477 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.401201 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.422941 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-backup-1" podStartSLOduration=6.422909442 podStartE2EDuration="6.422909442s" podCreationTimestamp="2026-01-29 16:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:29:15.664635134 +0000 UTC m=+1162.185136284" watchObservedRunningTime="2026-01-29 16:29:19.422909442 +0000 UTC m=+1165.943410562" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.724039 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-backup-2"] Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.727114 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.730811 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-2"] Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.832870 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-config-data\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.833308 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-etc-nvme\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.833353 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpsqq\" (UniqueName: \"kubernetes.io/projected/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-kube-api-access-xpsqq\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.833441 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-etc-machine-id\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.833519 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-var-locks-brick\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.833557 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-lib-modules\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.833593 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-var-locks-cinder\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.833630 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-sys\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.833656 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-config-data-custom\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.833719 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-dev\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.833809 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-etc-iscsi\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.833839 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-run\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.833880 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-scripts\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.834035 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-var-lib-cinder\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.935246 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-lib-modules\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.935324 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-var-locks-cinder\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.935365 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-sys\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.935396 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-config-data-custom\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 
29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.935435 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-dev\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.935476 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-dev\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.935395 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-lib-modules\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.935520 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-etc-iscsi\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.935440 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-sys\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.935516 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-var-locks-cinder\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.935549 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-run\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.935595 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-run\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.935608 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-scripts\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.935675 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-etc-iscsi\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc 
kubenswrapper[4714]: I0129 16:29:19.935776 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-var-lib-cinder\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.935686 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-var-lib-cinder\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.935890 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-config-data\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.935977 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-etc-nvme\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.936034 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpsqq\" (UniqueName: \"kubernetes.io/projected/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-kube-api-access-xpsqq\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.936115 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-etc-machine-id\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.936135 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-etc-nvme\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.936252 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-etc-machine-id\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.937008 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-var-locks-brick\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.936828 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-var-locks-brick\") pod 
\"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.947349 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-config-data-custom\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.949746 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-scripts\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.950080 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-config-data\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:19 crc kubenswrapper[4714]: I0129 16:29:19.958109 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpsqq\" (UniqueName: \"kubernetes.io/projected/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-kube-api-access-xpsqq\") pod \"cinder-backup-2\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:20 crc kubenswrapper[4714]: I0129 16:29:20.047565 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:20 crc kubenswrapper[4714]: I0129 16:29:20.511482 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-2"] Jan 29 16:29:20 crc kubenswrapper[4714]: W0129 16:29:20.517105 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc48b6b3_54d9_4751_a508_b351b1d7fc5d.slice/crio-59d59370e14af6ba69c2be0d33017b635c7ada8221c2372e0298182f5ac46b4b WatchSource:0}: Error finding container 59d59370e14af6ba69c2be0d33017b635c7ada8221c2372e0298182f5ac46b4b: Status 404 returned error can't find the container with id 59d59370e14af6ba69c2be0d33017b635c7ada8221c2372e0298182f5ac46b4b Jan 29 16:29:20 crc kubenswrapper[4714]: I0129 16:29:20.668006 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-2" event={"ID":"cc48b6b3-54d9-4751-a508-b351b1d7fc5d","Type":"ContainerStarted","Data":"59d59370e14af6ba69c2be0d33017b635c7ada8221c2372e0298182f5ac46b4b"} Jan 29 16:29:21 crc kubenswrapper[4714]: E0129 16:29:21.186606 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:29:21 crc kubenswrapper[4714]: I0129 16:29:21.676723 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-2" event={"ID":"cc48b6b3-54d9-4751-a508-b351b1d7fc5d","Type":"ContainerStarted","Data":"3b310f0bce66621749bf8c882b4c5762ba209ef177b7c55ba2a999b2c1e8074a"} Jan 29 16:29:21 crc kubenswrapper[4714]: I0129 16:29:21.676768 4714 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-2" event={"ID":"cc48b6b3-54d9-4751-a508-b351b1d7fc5d","Type":"ContainerStarted","Data":"2a756df7a344d06aa2fb997ad695f4c38702f1186154c912327c3b28f9495911"} Jan 29 16:29:21 crc kubenswrapper[4714]: I0129 16:29:21.699683 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-backup-2" podStartSLOduration=2.699661413 podStartE2EDuration="2.699661413s" podCreationTimestamp="2026-01-29 16:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:29:21.696542607 +0000 UTC m=+1168.217043747" watchObservedRunningTime="2026-01-29 16:29:21.699661413 +0000 UTC m=+1168.220162533" Jan 29 16:29:25 crc kubenswrapper[4714]: I0129 16:29:25.048120 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:25 crc kubenswrapper[4714]: I0129 16:29:25.184742 4714 scope.go:117] "RemoveContainer" containerID="f087542636afa3fd11613ddd38d7bd61a4a8cac1161d258a9bca9a6772f2da3c" Jan 29 16:29:25 crc kubenswrapper[4714]: I0129 16:29:25.184779 4714 scope.go:117] "RemoveContainer" containerID="cdf72985bc60f6ce06d587a8a7eb4e0fd4beed9399bf382db190b6ec4763fc7c" Jan 29 16:29:25 crc kubenswrapper[4714]: I0129 16:29:25.299710 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:25 crc kubenswrapper[4714]: I0129 16:29:25.709697 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"2e154d80-4b79-4f74-809e-c1c274ed4063","Type":"ContainerStarted","Data":"06b2f42073c400aedcdcd2bb5ec2c469776d6fe3bfbe8ce136fb5ad93196a673"} Jan 29 16:29:25 crc kubenswrapper[4714]: I0129 16:29:25.710100 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"2e154d80-4b79-4f74-809e-c1c274ed4063","Type":"ContainerStarted","Data":"bbd0c612c943b6ec94ed1610e6f355545ecaf388e165bbb67cb2c301b434e969"} Jan 29 16:29:26 crc kubenswrapper[4714]: I0129 16:29:26.322060 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-2"] Jan 29 16:29:26 crc kubenswrapper[4714]: I0129 16:29:26.715812 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-2" podUID="cc48b6b3-54d9-4751-a508-b351b1d7fc5d" containerName="cinder-backup" containerID="cri-o://2a756df7a344d06aa2fb997ad695f4c38702f1186154c912327c3b28f9495911" gracePeriod=30 Jan 29 16:29:26 crc kubenswrapper[4714]: I0129 16:29:26.715902 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-2" podUID="cc48b6b3-54d9-4751-a508-b351b1d7fc5d" containerName="probe" containerID="cri-o://3b310f0bce66621749bf8c882b4c5762ba209ef177b7c55ba2a999b2c1e8074a" gracePeriod=30 Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.714648 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.724458 4714 generic.go:334] "Generic (PLEG): container finished" podID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerID="06b2f42073c400aedcdcd2bb5ec2c469776d6fe3bfbe8ce136fb5ad93196a673" exitCode=1 Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.724502 4714 generic.go:334] "Generic (PLEG): container finished" podID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerID="bbd0c612c943b6ec94ed1610e6f355545ecaf388e165bbb67cb2c301b434e969" exitCode=1 Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.724549 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"2e154d80-4b79-4f74-809e-c1c274ed4063","Type":"ContainerDied","Data":"06b2f42073c400aedcdcd2bb5ec2c469776d6fe3bfbe8ce136fb5ad93196a673"} Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.724582 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"2e154d80-4b79-4f74-809e-c1c274ed4063","Type":"ContainerDied","Data":"bbd0c612c943b6ec94ed1610e6f355545ecaf388e165bbb67cb2c301b434e969"} Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.724602 4714 scope.go:117] "RemoveContainer" containerID="cdf72985bc60f6ce06d587a8a7eb4e0fd4beed9399bf382db190b6ec4763fc7c" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.725251 4714 scope.go:117] "RemoveContainer" containerID="bbd0c612c943b6ec94ed1610e6f355545ecaf388e165bbb67cb2c301b434e969" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.725306 4714 scope.go:117] "RemoveContainer" containerID="06b2f42073c400aedcdcd2bb5ec2c469776d6fe3bfbe8ce136fb5ad93196a673" Jan 29 16:29:27 crc kubenswrapper[4714]: E0129 16:29:27.725656 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 40s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.729732 4714 generic.go:334] "Generic (PLEG): container finished" podID="cc48b6b3-54d9-4751-a508-b351b1d7fc5d" containerID="3b310f0bce66621749bf8c882b4c5762ba209ef177b7c55ba2a999b2c1e8074a" exitCode=0 Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.729774 4714 generic.go:334] "Generic (PLEG): container finished" podID="cc48b6b3-54d9-4751-a508-b351b1d7fc5d" containerID="2a756df7a344d06aa2fb997ad695f4c38702f1186154c912327c3b28f9495911" exitCode=0 Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.729816 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-2" event={"ID":"cc48b6b3-54d9-4751-a508-b351b1d7fc5d","Type":"ContainerDied","Data":"3b310f0bce66621749bf8c882b4c5762ba209ef177b7c55ba2a999b2c1e8074a"} Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.729851 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-2" event={"ID":"cc48b6b3-54d9-4751-a508-b351b1d7fc5d","Type":"ContainerDied","Data":"2a756df7a344d06aa2fb997ad695f4c38702f1186154c912327c3b28f9495911"} Jan 29 16:29:27 crc 
kubenswrapper[4714]: I0129 16:29:27.729864 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-2" event={"ID":"cc48b6b3-54d9-4751-a508-b351b1d7fc5d","Type":"ContainerDied","Data":"59d59370e14af6ba69c2be0d33017b635c7ada8221c2372e0298182f5ac46b4b"} Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.729926 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-2" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.782524 4714 scope.go:117] "RemoveContainer" containerID="f087542636afa3fd11613ddd38d7bd61a4a8cac1161d258a9bca9a6772f2da3c" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.823552 4714 scope.go:117] "RemoveContainer" containerID="3b310f0bce66621749bf8c882b4c5762ba209ef177b7c55ba2a999b2c1e8074a" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.837836 4714 scope.go:117] "RemoveContainer" containerID="2a756df7a344d06aa2fb997ad695f4c38702f1186154c912327c3b28f9495911" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.855747 4714 scope.go:117] "RemoveContainer" containerID="3b310f0bce66621749bf8c882b4c5762ba209ef177b7c55ba2a999b2c1e8074a" Jan 29 16:29:27 crc kubenswrapper[4714]: E0129 16:29:27.856265 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b310f0bce66621749bf8c882b4c5762ba209ef177b7c55ba2a999b2c1e8074a\": container with ID starting with 3b310f0bce66621749bf8c882b4c5762ba209ef177b7c55ba2a999b2c1e8074a not found: ID does not exist" containerID="3b310f0bce66621749bf8c882b4c5762ba209ef177b7c55ba2a999b2c1e8074a" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.856318 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b310f0bce66621749bf8c882b4c5762ba209ef177b7c55ba2a999b2c1e8074a"} err="failed to get container status \"3b310f0bce66621749bf8c882b4c5762ba209ef177b7c55ba2a999b2c1e8074a\": rpc error: code = NotFound desc = could not find container \"3b310f0bce66621749bf8c882b4c5762ba209ef177b7c55ba2a999b2c1e8074a\": container with ID starting with 3b310f0bce66621749bf8c882b4c5762ba209ef177b7c55ba2a999b2c1e8074a not found: ID does not exist" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.856350 4714 scope.go:117] "RemoveContainer" containerID="2a756df7a344d06aa2fb997ad695f4c38702f1186154c912327c3b28f9495911" Jan 29 16:29:27 crc kubenswrapper[4714]: E0129 16:29:27.856693 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a756df7a344d06aa2fb997ad695f4c38702f1186154c912327c3b28f9495911\": container with ID starting with 2a756df7a344d06aa2fb997ad695f4c38702f1186154c912327c3b28f9495911 not found: ID does not exist" containerID="2a756df7a344d06aa2fb997ad695f4c38702f1186154c912327c3b28f9495911" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.856728 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a756df7a344d06aa2fb997ad695f4c38702f1186154c912327c3b28f9495911"} err="failed to get container status \"2a756df7a344d06aa2fb997ad695f4c38702f1186154c912327c3b28f9495911\": rpc error: code = NotFound desc = could not find container \"2a756df7a344d06aa2fb997ad695f4c38702f1186154c912327c3b28f9495911\": container with ID starting with 2a756df7a344d06aa2fb997ad695f4c38702f1186154c912327c3b28f9495911 not found: ID does not exist" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.856752 4714 
scope.go:117] "RemoveContainer" containerID="3b310f0bce66621749bf8c882b4c5762ba209ef177b7c55ba2a999b2c1e8074a" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.857099 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b310f0bce66621749bf8c882b4c5762ba209ef177b7c55ba2a999b2c1e8074a"} err="failed to get container status \"3b310f0bce66621749bf8c882b4c5762ba209ef177b7c55ba2a999b2c1e8074a\": rpc error: code = NotFound desc = could not find container \"3b310f0bce66621749bf8c882b4c5762ba209ef177b7c55ba2a999b2c1e8074a\": container with ID starting with 3b310f0bce66621749bf8c882b4c5762ba209ef177b7c55ba2a999b2c1e8074a not found: ID does not exist" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.857120 4714 scope.go:117] "RemoveContainer" containerID="2a756df7a344d06aa2fb997ad695f4c38702f1186154c912327c3b28f9495911" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.857352 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a756df7a344d06aa2fb997ad695f4c38702f1186154c912327c3b28f9495911"} err="failed to get container status \"2a756df7a344d06aa2fb997ad695f4c38702f1186154c912327c3b28f9495911\": rpc error: code = NotFound desc = could not find container \"2a756df7a344d06aa2fb997ad695f4c38702f1186154c912327c3b28f9495911\": container with ID starting with 2a756df7a344d06aa2fb997ad695f4c38702f1186154c912327c3b28f9495911 not found: ID does not exist" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.869183 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-etc-iscsi\") pod \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.869240 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-config-data-custom\") pod \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.869260 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-etc-nvme\") pod \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.869285 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-sys\") pod \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.869322 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpsqq\" (UniqueName: \"kubernetes.io/projected/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-kube-api-access-xpsqq\") pod \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.869339 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-lib-modules\") pod \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\" (UID: 
\"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.869340 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "cc48b6b3-54d9-4751-a508-b351b1d7fc5d" (UID: "cc48b6b3-54d9-4751-a508-b351b1d7fc5d"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.869370 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-run\") pod \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.869390 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-var-locks-cinder\") pod \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.869407 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-sys" (OuterVolumeSpecName: "sys") pod "cc48b6b3-54d9-4751-a508-b351b1d7fc5d" (UID: "cc48b6b3-54d9-4751-a508-b351b1d7fc5d"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.869437 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-var-locks-brick\") pod \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.869462 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-config-data\") pod \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.869483 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-dev\") pod \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.869510 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-scripts\") pod \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.869557 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-etc-machine-id\") pod \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.869578 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-var-lib-cinder\") 
pod \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\" (UID: \"cc48b6b3-54d9-4751-a508-b351b1d7fc5d\") " Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.869844 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "cc48b6b3-54d9-4751-a508-b351b1d7fc5d" (UID: "cc48b6b3-54d9-4751-a508-b351b1d7fc5d"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.870156 4714 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.870174 4714 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.870188 4714 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-sys\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.870301 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-dev" (OuterVolumeSpecName: "dev") pod "cc48b6b3-54d9-4751-a508-b351b1d7fc5d" (UID: "cc48b6b3-54d9-4751-a508-b351b1d7fc5d"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.870328 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "cc48b6b3-54d9-4751-a508-b351b1d7fc5d" (UID: "cc48b6b3-54d9-4751-a508-b351b1d7fc5d"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.870356 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cc48b6b3-54d9-4751-a508-b351b1d7fc5d" (UID: "cc48b6b3-54d9-4751-a508-b351b1d7fc5d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.870387 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cc48b6b3-54d9-4751-a508-b351b1d7fc5d" (UID: "cc48b6b3-54d9-4751-a508-b351b1d7fc5d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.870414 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "cc48b6b3-54d9-4751-a508-b351b1d7fc5d" (UID: "cc48b6b3-54d9-4751-a508-b351b1d7fc5d"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.870415 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-run" (OuterVolumeSpecName: "run") pod "cc48b6b3-54d9-4751-a508-b351b1d7fc5d" (UID: "cc48b6b3-54d9-4751-a508-b351b1d7fc5d"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.870435 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "cc48b6b3-54d9-4751-a508-b351b1d7fc5d" (UID: "cc48b6b3-54d9-4751-a508-b351b1d7fc5d"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.874454 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-scripts" (OuterVolumeSpecName: "scripts") pod "cc48b6b3-54d9-4751-a508-b351b1d7fc5d" (UID: "cc48b6b3-54d9-4751-a508-b351b1d7fc5d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.874468 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-kube-api-access-xpsqq" (OuterVolumeSpecName: "kube-api-access-xpsqq") pod "cc48b6b3-54d9-4751-a508-b351b1d7fc5d" (UID: "cc48b6b3-54d9-4751-a508-b351b1d7fc5d"). InnerVolumeSpecName "kube-api-access-xpsqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.874629 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cc48b6b3-54d9-4751-a508-b351b1d7fc5d" (UID: "cc48b6b3-54d9-4751-a508-b351b1d7fc5d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.936522 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-config-data" (OuterVolumeSpecName: "config-data") pod "cc48b6b3-54d9-4751-a508-b351b1d7fc5d" (UID: "cc48b6b3-54d9-4751-a508-b351b1d7fc5d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.971508 4714 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.971549 4714 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.971564 4714 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.971580 4714 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.971593 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpsqq\" (UniqueName: \"kubernetes.io/projected/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-kube-api-access-xpsqq\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.971607 4714 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.971619 4714 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-run\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.971633 4714 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.971645 4714 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.971657 4714 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:27 crc kubenswrapper[4714]: I0129 16:29:27.971668 4714 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cc48b6b3-54d9-4751-a508-b351b1d7fc5d-dev\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4714]: I0129 16:29:28.067138 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-2"] Jan 29 16:29:28 crc kubenswrapper[4714]: I0129 16:29:28.082511 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-backup-2"] Jan 29 16:29:28 crc kubenswrapper[4714]: I0129 16:29:28.088831 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"] Jan 29 16:29:28 crc kubenswrapper[4714]: I0129 16:29:28.089254 4714 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="cinder-kuttl-tests/cinder-backup-1" podUID="1df413aa-ba11-47aa-9b37-989956046c9f" containerName="probe" containerID="cri-o://216659e8b1aeed2e1c0a3e238db1d9a8366df5059f31bffd8d1c1116d3eda51f" gracePeriod=30 Jan 29 16:29:28 crc kubenswrapper[4714]: I0129 16:29:28.089237 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-1" podUID="1df413aa-ba11-47aa-9b37-989956046c9f" containerName="cinder-backup" containerID="cri-o://11899e2f2c2124b1f13dd44e2372c23241e6ddb29215a4c11ae4fe8ed891c9bb" gracePeriod=30 Jan 29 16:29:28 crc kubenswrapper[4714]: I0129 16:29:28.194486 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc48b6b3-54d9-4751-a508-b351b1d7fc5d" path="/var/lib/kubelet/pods/cc48b6b3-54d9-4751-a508-b351b1d7fc5d/volumes" Jan 29 16:29:28 crc kubenswrapper[4714]: I0129 16:29:28.745308 4714 generic.go:334] "Generic (PLEG): container finished" podID="1df413aa-ba11-47aa-9b37-989956046c9f" containerID="216659e8b1aeed2e1c0a3e238db1d9a8366df5059f31bffd8d1c1116d3eda51f" exitCode=0 Jan 29 16:29:28 crc kubenswrapper[4714]: I0129 16:29:28.745402 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-1" event={"ID":"1df413aa-ba11-47aa-9b37-989956046c9f","Type":"ContainerDied","Data":"216659e8b1aeed2e1c0a3e238db1d9a8366df5059f31bffd8d1c1116d3eda51f"} Jan 29 16:29:29 crc kubenswrapper[4714]: I0129 16:29:29.887304 4714 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:29:29 crc kubenswrapper[4714]: I0129 16:29:29.887562 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:29:29 crc kubenswrapper[4714]: I0129 16:29:29.887572 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:29:29 crc kubenswrapper[4714]: I0129 16:29:29.888126 4714 scope.go:117] "RemoveContainer" containerID="bbd0c612c943b6ec94ed1610e6f355545ecaf388e165bbb67cb2c301b434e969" Jan 29 16:29:29 crc kubenswrapper[4714]: I0129 16:29:29.888137 4714 scope.go:117] "RemoveContainer" containerID="06b2f42073c400aedcdcd2bb5ec2c469776d6fe3bfbe8ce136fb5ad93196a673" Jan 29 16:29:29 crc kubenswrapper[4714]: E0129 16:29:29.888406 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 40s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" Jan 29 16:29:31 crc kubenswrapper[4714]: I0129 16:29:31.776526 4714 generic.go:334] "Generic (PLEG): container finished" podID="1df413aa-ba11-47aa-9b37-989956046c9f" containerID="11899e2f2c2124b1f13dd44e2372c23241e6ddb29215a4c11ae4fe8ed891c9bb" exitCode=0 Jan 29 16:29:31 crc kubenswrapper[4714]: I0129 16:29:31.776666 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-1" event={"ID":"1df413aa-ba11-47aa-9b37-989956046c9f","Type":"ContainerDied","Data":"11899e2f2c2124b1f13dd44e2372c23241e6ddb29215a4c11ae4fe8ed891c9bb"} Jan 29 16:29:32 crc 
kubenswrapper[4714]: I0129 16:29:32.243008 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.350234 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1df413aa-ba11-47aa-9b37-989956046c9f-scripts\") pod \"1df413aa-ba11-47aa-9b37-989956046c9f\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.350296 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-run\") pod \"1df413aa-ba11-47aa-9b37-989956046c9f\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.350325 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-lib-modules\") pod \"1df413aa-ba11-47aa-9b37-989956046c9f\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.350376 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmcjf\" (UniqueName: \"kubernetes.io/projected/1df413aa-ba11-47aa-9b37-989956046c9f-kube-api-access-kmcjf\") pod \"1df413aa-ba11-47aa-9b37-989956046c9f\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.350393 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-etc-machine-id\") pod \"1df413aa-ba11-47aa-9b37-989956046c9f\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.350409 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1df413aa-ba11-47aa-9b37-989956046c9f-config-data\") pod \"1df413aa-ba11-47aa-9b37-989956046c9f\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.350426 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-etc-iscsi\") pod \"1df413aa-ba11-47aa-9b37-989956046c9f\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.350457 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-var-locks-cinder\") pod \"1df413aa-ba11-47aa-9b37-989956046c9f\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.350476 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-dev\") pod \"1df413aa-ba11-47aa-9b37-989956046c9f\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.350500 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1df413aa-ba11-47aa-9b37-989956046c9f-config-data-custom\") pod 
\"1df413aa-ba11-47aa-9b37-989956046c9f\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.350559 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-sys\") pod \"1df413aa-ba11-47aa-9b37-989956046c9f\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.350575 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-etc-nvme\") pod \"1df413aa-ba11-47aa-9b37-989956046c9f\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.350589 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-var-locks-brick\") pod \"1df413aa-ba11-47aa-9b37-989956046c9f\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.350615 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-var-lib-cinder\") pod \"1df413aa-ba11-47aa-9b37-989956046c9f\" (UID: \"1df413aa-ba11-47aa-9b37-989956046c9f\") " Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.351491 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "1df413aa-ba11-47aa-9b37-989956046c9f" (UID: "1df413aa-ba11-47aa-9b37-989956046c9f"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.351582 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "1df413aa-ba11-47aa-9b37-989956046c9f" (UID: "1df413aa-ba11-47aa-9b37-989956046c9f"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.351589 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1df413aa-ba11-47aa-9b37-989956046c9f" (UID: "1df413aa-ba11-47aa-9b37-989956046c9f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.351623 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-dev" (OuterVolumeSpecName: "dev") pod "1df413aa-ba11-47aa-9b37-989956046c9f" (UID: "1df413aa-ba11-47aa-9b37-989956046c9f"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.351643 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-sys" (OuterVolumeSpecName: "sys") pod "1df413aa-ba11-47aa-9b37-989956046c9f" (UID: "1df413aa-ba11-47aa-9b37-989956046c9f"). 
InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.351640 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "1df413aa-ba11-47aa-9b37-989956046c9f" (UID: "1df413aa-ba11-47aa-9b37-989956046c9f"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.351669 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-run" (OuterVolumeSpecName: "run") pod "1df413aa-ba11-47aa-9b37-989956046c9f" (UID: "1df413aa-ba11-47aa-9b37-989956046c9f"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.351670 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "1df413aa-ba11-47aa-9b37-989956046c9f" (UID: "1df413aa-ba11-47aa-9b37-989956046c9f"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.351689 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1df413aa-ba11-47aa-9b37-989956046c9f" (UID: "1df413aa-ba11-47aa-9b37-989956046c9f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.352332 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "1df413aa-ba11-47aa-9b37-989956046c9f" (UID: "1df413aa-ba11-47aa-9b37-989956046c9f"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.370371 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1df413aa-ba11-47aa-9b37-989956046c9f-scripts" (OuterVolumeSpecName: "scripts") pod "1df413aa-ba11-47aa-9b37-989956046c9f" (UID: "1df413aa-ba11-47aa-9b37-989956046c9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.370389 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1df413aa-ba11-47aa-9b37-989956046c9f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1df413aa-ba11-47aa-9b37-989956046c9f" (UID: "1df413aa-ba11-47aa-9b37-989956046c9f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.370425 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1df413aa-ba11-47aa-9b37-989956046c9f-kube-api-access-kmcjf" (OuterVolumeSpecName: "kube-api-access-kmcjf") pod "1df413aa-ba11-47aa-9b37-989956046c9f" (UID: "1df413aa-ba11-47aa-9b37-989956046c9f"). InnerVolumeSpecName "kube-api-access-kmcjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.438943 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1df413aa-ba11-47aa-9b37-989956046c9f-config-data" (OuterVolumeSpecName: "config-data") pod "1df413aa-ba11-47aa-9b37-989956046c9f" (UID: "1df413aa-ba11-47aa-9b37-989956046c9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.453142 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmcjf\" (UniqueName: \"kubernetes.io/projected/1df413aa-ba11-47aa-9b37-989956046c9f-kube-api-access-kmcjf\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.453188 4714 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1df413aa-ba11-47aa-9b37-989956046c9f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.453204 4714 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.453216 4714 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.453229 4714 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.453240 4714 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-dev\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.453252 4714 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1df413aa-ba11-47aa-9b37-989956046c9f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.453263 4714 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-sys\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.453275 4714 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.453287 4714 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.453299 4714 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.453311 4714 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1df413aa-ba11-47aa-9b37-989956046c9f-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.453322 4714 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-run\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.453333 4714 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1df413aa-ba11-47aa-9b37-989956046c9f-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.788003 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-1" event={"ID":"1df413aa-ba11-47aa-9b37-989956046c9f","Type":"ContainerDied","Data":"267d12a21d0575c908ad4f4d9479c0c501da5cdcd406eba9e25bec597bbd362d"} Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.788379 4714 scope.go:117] "RemoveContainer" containerID="216659e8b1aeed2e1c0a3e238db1d9a8366df5059f31bffd8d1c1116d3eda51f" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.788110 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-1" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.817045 4714 scope.go:117] "RemoveContainer" containerID="11899e2f2c2124b1f13dd44e2372c23241e6ddb29215a4c11ae4fe8ed891c9bb" Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.848876 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"] Jan 29 16:29:32 crc kubenswrapper[4714]: I0129 16:29:32.854074 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"] Jan 29 16:29:33 crc kubenswrapper[4714]: I0129 16:29:33.616022 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 29 16:29:33 crc kubenswrapper[4714]: I0129 16:29:33.616654 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-0" podUID="9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3" containerName="cinder-api-log" containerID="cri-o://79461c5a69e4029a68e7e8faa95d70a67ee30b1d8f5d9b8793dd7214e3da2c42" gracePeriod=30 Jan 29 16:29:33 crc kubenswrapper[4714]: I0129 16:29:33.616786 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-0" podUID="9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3" containerName="cinder-api" containerID="cri-o://e1b4f5a218866bd462e639ef9e2c44453f1907feb94c13e36f29e0c48f866074" gracePeriod=30 Jan 29 16:29:33 crc kubenswrapper[4714]: I0129 16:29:33.794699 4714 generic.go:334] "Generic (PLEG): container finished" podID="9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3" containerID="79461c5a69e4029a68e7e8faa95d70a67ee30b1d8f5d9b8793dd7214e3da2c42" exitCode=143 Jan 29 16:29:33 crc kubenswrapper[4714]: I0129 16:29:33.794761 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3","Type":"ContainerDied","Data":"79461c5a69e4029a68e7e8faa95d70a67ee30b1d8f5d9b8793dd7214e3da2c42"} Jan 29 16:29:34 crc kubenswrapper[4714]: I0129 16:29:34.198459 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1df413aa-ba11-47aa-9b37-989956046c9f" path="/var/lib/kubelet/pods/1df413aa-ba11-47aa-9b37-989956046c9f/volumes" Jan 29 16:29:35 crc kubenswrapper[4714]: E0129 16:29:35.188150 4714 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:29:36 crc kubenswrapper[4714]: I0129 16:29:36.766656 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/cinder-api-0" podUID="9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.93:8776/healthcheck\": read tcp 10.217.0.2:49470->10.217.0.93:8776: read: connection reset by peer" Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.745804 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.828616 4714 generic.go:334] "Generic (PLEG): container finished" podID="9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3" containerID="e1b4f5a218866bd462e639ef9e2c44453f1907feb94c13e36f29e0c48f866074" exitCode=0 Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.828662 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3","Type":"ContainerDied","Data":"e1b4f5a218866bd462e639ef9e2c44453f1907feb94c13e36f29e0c48f866074"} Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.828690 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3","Type":"ContainerDied","Data":"4d4f411f7f92df4f5d547c75bfc715baa406061177023b383b53537927256cc7"} Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.828706 4714 scope.go:117] "RemoveContainer" containerID="e1b4f5a218866bd462e639ef9e2c44453f1907feb94c13e36f29e0c48f866074" Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.828835 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.849762 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-config-data-custom\") pod \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\" (UID: \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\") " Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.849870 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-796r4\" (UniqueName: \"kubernetes.io/projected/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-kube-api-access-796r4\") pod \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\" (UID: \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\") " Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.849961 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-scripts\") pod \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\" (UID: \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\") " Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.849988 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-logs\") pod \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\" (UID: \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\") " Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.850013 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-config-data\") pod \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\" (UID: \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\") " Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.850038 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-etc-machine-id\") pod \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\" (UID: \"9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3\") " Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.850320 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3" (UID: "9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.851213 4714 scope.go:117] "RemoveContainer" containerID="79461c5a69e4029a68e7e8faa95d70a67ee30b1d8f5d9b8793dd7214e3da2c42" Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.851615 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-logs" (OuterVolumeSpecName: "logs") pod "9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3" (UID: "9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.858370 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-scripts" (OuterVolumeSpecName: "scripts") pod "9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3" (UID: "9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.861334 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-kube-api-access-796r4" (OuterVolumeSpecName: "kube-api-access-796r4") pod "9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3" (UID: "9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3"). InnerVolumeSpecName "kube-api-access-796r4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.867201 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3" (UID: "9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.891079 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-config-data" (OuterVolumeSpecName: "config-data") pod "9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3" (UID: "9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.944133 4714 scope.go:117] "RemoveContainer" containerID="e1b4f5a218866bd462e639ef9e2c44453f1907feb94c13e36f29e0c48f866074" Jan 29 16:29:37 crc kubenswrapper[4714]: E0129 16:29:37.944618 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1b4f5a218866bd462e639ef9e2c44453f1907feb94c13e36f29e0c48f866074\": container with ID starting with e1b4f5a218866bd462e639ef9e2c44453f1907feb94c13e36f29e0c48f866074 not found: ID does not exist" containerID="e1b4f5a218866bd462e639ef9e2c44453f1907feb94c13e36f29e0c48f866074" Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.944662 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1b4f5a218866bd462e639ef9e2c44453f1907feb94c13e36f29e0c48f866074"} err="failed to get container status \"e1b4f5a218866bd462e639ef9e2c44453f1907feb94c13e36f29e0c48f866074\": rpc error: code = NotFound desc = could not find container \"e1b4f5a218866bd462e639ef9e2c44453f1907feb94c13e36f29e0c48f866074\": container with ID starting with e1b4f5a218866bd462e639ef9e2c44453f1907feb94c13e36f29e0c48f866074 not found: ID does not exist" Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.944690 4714 scope.go:117] "RemoveContainer" containerID="79461c5a69e4029a68e7e8faa95d70a67ee30b1d8f5d9b8793dd7214e3da2c42" Jan 29 16:29:37 crc kubenswrapper[4714]: E0129 16:29:37.944959 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79461c5a69e4029a68e7e8faa95d70a67ee30b1d8f5d9b8793dd7214e3da2c42\": container with ID starting with 79461c5a69e4029a68e7e8faa95d70a67ee30b1d8f5d9b8793dd7214e3da2c42 not found: ID does not exist" containerID="79461c5a69e4029a68e7e8faa95d70a67ee30b1d8f5d9b8793dd7214e3da2c42" Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.944988 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79461c5a69e4029a68e7e8faa95d70a67ee30b1d8f5d9b8793dd7214e3da2c42"} err="failed to get container status 
\"79461c5a69e4029a68e7e8faa95d70a67ee30b1d8f5d9b8793dd7214e3da2c42\": rpc error: code = NotFound desc = could not find container \"79461c5a69e4029a68e7e8faa95d70a67ee30b1d8f5d9b8793dd7214e3da2c42\": container with ID starting with 79461c5a69e4029a68e7e8faa95d70a67ee30b1d8f5d9b8793dd7214e3da2c42 not found: ID does not exist" Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.952100 4714 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.952123 4714 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.952132 4714 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.952140 4714 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.952149 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-796r4\" (UniqueName: \"kubernetes.io/projected/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-kube-api-access-796r4\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:37 crc kubenswrapper[4714]: I0129 16:29:37.952157 4714 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:38 crc kubenswrapper[4714]: I0129 16:29:38.166049 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 29 16:29:38 crc kubenswrapper[4714]: I0129 16:29:38.174067 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 29 16:29:38 crc kubenswrapper[4714]: I0129 16:29:38.198684 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3" path="/var/lib/kubelet/pods/9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3/volumes" Jan 29 16:29:38 crc kubenswrapper[4714]: I0129 16:29:38.945624 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 29 16:29:38 crc kubenswrapper[4714]: E0129 16:29:38.946621 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3" containerName="cinder-api-log" Jan 29 16:29:38 crc kubenswrapper[4714]: I0129 16:29:38.946652 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3" containerName="cinder-api-log" Jan 29 16:29:38 crc kubenswrapper[4714]: E0129 16:29:38.946683 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3" containerName="cinder-api" Jan 29 16:29:38 crc kubenswrapper[4714]: I0129 16:29:38.946700 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3" containerName="cinder-api" Jan 29 16:29:38 crc kubenswrapper[4714]: E0129 16:29:38.946718 4714 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cc48b6b3-54d9-4751-a508-b351b1d7fc5d" containerName="cinder-backup" Jan 29 16:29:38 crc kubenswrapper[4714]: I0129 16:29:38.946733 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc48b6b3-54d9-4751-a508-b351b1d7fc5d" containerName="cinder-backup" Jan 29 16:29:38 crc kubenswrapper[4714]: E0129 16:29:38.946774 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1df413aa-ba11-47aa-9b37-989956046c9f" containerName="cinder-backup" Jan 29 16:29:38 crc kubenswrapper[4714]: I0129 16:29:38.946789 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="1df413aa-ba11-47aa-9b37-989956046c9f" containerName="cinder-backup" Jan 29 16:29:38 crc kubenswrapper[4714]: E0129 16:29:38.946822 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc48b6b3-54d9-4751-a508-b351b1d7fc5d" containerName="probe" Jan 29 16:29:38 crc kubenswrapper[4714]: I0129 16:29:38.946837 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc48b6b3-54d9-4751-a508-b351b1d7fc5d" containerName="probe" Jan 29 16:29:38 crc kubenswrapper[4714]: E0129 16:29:38.946869 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1df413aa-ba11-47aa-9b37-989956046c9f" containerName="probe" Jan 29 16:29:38 crc kubenswrapper[4714]: I0129 16:29:38.946886 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="1df413aa-ba11-47aa-9b37-989956046c9f" containerName="probe" Jan 29 16:29:38 crc kubenswrapper[4714]: I0129 16:29:38.947234 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3" containerName="cinder-api" Jan 29 16:29:38 crc kubenswrapper[4714]: I0129 16:29:38.947257 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="1df413aa-ba11-47aa-9b37-989956046c9f" containerName="cinder-backup" Jan 29 16:29:38 crc kubenswrapper[4714]: I0129 16:29:38.947274 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc48b6b3-54d9-4751-a508-b351b1d7fc5d" containerName="probe" Jan 29 16:29:38 crc kubenswrapper[4714]: I0129 16:29:38.947302 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b21fd86-5b9b-4b13-82aa-eb3d7f1fafb3" containerName="cinder-api-log" Jan 29 16:29:38 crc kubenswrapper[4714]: I0129 16:29:38.947329 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="1df413aa-ba11-47aa-9b37-989956046c9f" containerName="probe" Jan 29 16:29:38 crc kubenswrapper[4714]: I0129 16:29:38.947356 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc48b6b3-54d9-4751-a508-b351b1d7fc5d" containerName="cinder-backup" Jan 29 16:29:38 crc kubenswrapper[4714]: I0129 16:29:38.948913 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:29:38 crc kubenswrapper[4714]: I0129 16:29:38.952259 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-api-2"] Jan 29 16:29:38 crc kubenswrapper[4714]: I0129 16:29:38.953374 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-2" Jan 29 16:29:38 crc kubenswrapper[4714]: I0129 16:29:38.962493 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-api-1"] Jan 29 16:29:38 crc kubenswrapper[4714]: I0129 16:29:38.963813 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-api-1" Jan 29 16:29:38 crc kubenswrapper[4714]: I0129 16:29:38.968226 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-api-config-data" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.067637 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-1"] Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.072164 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5a3f592-54c6-44f1-9f09-f366502287a6-config-data-custom\") pod \"cinder-api-0\" (UID: \"c5a3f592-54c6-44f1-9f09-f366502287a6\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.072212 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/869a701a-040e-44ea-98cc-53eb5f33c933-scripts\") pod \"cinder-api-2\" (UID: \"869a701a-040e-44ea-98cc-53eb5f33c933\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.072235 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk7jf\" (UniqueName: \"kubernetes.io/projected/84becd41-a7aa-4b36-a6a7-8516c2e64909-kube-api-access-xk7jf\") pod \"cinder-api-1\" (UID: \"84becd41-a7aa-4b36-a6a7-8516c2e64909\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.072253 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p55ww\" (UniqueName: \"kubernetes.io/projected/c5a3f592-54c6-44f1-9f09-f366502287a6-kube-api-access-p55ww\") pod \"cinder-api-0\" (UID: \"c5a3f592-54c6-44f1-9f09-f366502287a6\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.072269 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/869a701a-040e-44ea-98cc-53eb5f33c933-config-data\") pod \"cinder-api-2\" (UID: \"869a701a-040e-44ea-98cc-53eb5f33c933\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.072286 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84becd41-a7aa-4b36-a6a7-8516c2e64909-config-data\") pod \"cinder-api-1\" (UID: \"84becd41-a7aa-4b36-a6a7-8516c2e64909\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.072306 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5a3f592-54c6-44f1-9f09-f366502287a6-scripts\") pod \"cinder-api-0\" (UID: \"c5a3f592-54c6-44f1-9f09-f366502287a6\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.072327 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/869a701a-040e-44ea-98cc-53eb5f33c933-config-data-custom\") pod \"cinder-api-2\" (UID: \"869a701a-040e-44ea-98cc-53eb5f33c933\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.072353 4714 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84becd41-a7aa-4b36-a6a7-8516c2e64909-etc-machine-id\") pod \"cinder-api-1\" (UID: \"84becd41-a7aa-4b36-a6a7-8516c2e64909\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.072371 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5a3f592-54c6-44f1-9f09-f366502287a6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c5a3f592-54c6-44f1-9f09-f366502287a6\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.072385 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84becd41-a7aa-4b36-a6a7-8516c2e64909-scripts\") pod \"cinder-api-1\" (UID: \"84becd41-a7aa-4b36-a6a7-8516c2e64909\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.072399 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/869a701a-040e-44ea-98cc-53eb5f33c933-etc-machine-id\") pod \"cinder-api-2\" (UID: \"869a701a-040e-44ea-98cc-53eb5f33c933\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.072414 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84becd41-a7aa-4b36-a6a7-8516c2e64909-config-data-custom\") pod \"cinder-api-1\" (UID: \"84becd41-a7aa-4b36-a6a7-8516c2e64909\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.072432 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84becd41-a7aa-4b36-a6a7-8516c2e64909-logs\") pod \"cinder-api-1\" (UID: \"84becd41-a7aa-4b36-a6a7-8516c2e64909\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.072446 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a3f592-54c6-44f1-9f09-f366502287a6-config-data\") pod \"cinder-api-0\" (UID: \"c5a3f592-54c6-44f1-9f09-f366502287a6\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.072462 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5a3f592-54c6-44f1-9f09-f366502287a6-logs\") pod \"cinder-api-0\" (UID: \"c5a3f592-54c6-44f1-9f09-f366502287a6\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.072476 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/869a701a-040e-44ea-98cc-53eb5f33c933-logs\") pod \"cinder-api-2\" (UID: \"869a701a-040e-44ea-98cc-53eb5f33c933\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.072515 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zvnq\" (UniqueName: 
\"kubernetes.io/projected/869a701a-040e-44ea-98cc-53eb5f33c933-kube-api-access-4zvnq\") pod \"cinder-api-2\" (UID: \"869a701a-040e-44ea-98cc-53eb5f33c933\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.079322 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.087008 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-2"] Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.173708 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/869a701a-040e-44ea-98cc-53eb5f33c933-etc-machine-id\") pod \"cinder-api-2\" (UID: \"869a701a-040e-44ea-98cc-53eb5f33c933\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.173763 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84becd41-a7aa-4b36-a6a7-8516c2e64909-config-data-custom\") pod \"cinder-api-1\" (UID: \"84becd41-a7aa-4b36-a6a7-8516c2e64909\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.173781 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84becd41-a7aa-4b36-a6a7-8516c2e64909-logs\") pod \"cinder-api-1\" (UID: \"84becd41-a7aa-4b36-a6a7-8516c2e64909\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.173803 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a3f592-54c6-44f1-9f09-f366502287a6-config-data\") pod \"cinder-api-0\" (UID: \"c5a3f592-54c6-44f1-9f09-f366502287a6\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.173822 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5a3f592-54c6-44f1-9f09-f366502287a6-logs\") pod \"cinder-api-0\" (UID: \"c5a3f592-54c6-44f1-9f09-f366502287a6\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.173837 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/869a701a-040e-44ea-98cc-53eb5f33c933-logs\") pod \"cinder-api-2\" (UID: \"869a701a-040e-44ea-98cc-53eb5f33c933\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.173877 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zvnq\" (UniqueName: \"kubernetes.io/projected/869a701a-040e-44ea-98cc-53eb5f33c933-kube-api-access-4zvnq\") pod \"cinder-api-2\" (UID: \"869a701a-040e-44ea-98cc-53eb5f33c933\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.173917 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5a3f592-54c6-44f1-9f09-f366502287a6-config-data-custom\") pod \"cinder-api-0\" (UID: \"c5a3f592-54c6-44f1-9f09-f366502287a6\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.173958 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/869a701a-040e-44ea-98cc-53eb5f33c933-scripts\") pod \"cinder-api-2\" (UID: \"869a701a-040e-44ea-98cc-53eb5f33c933\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.173984 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk7jf\" (UniqueName: \"kubernetes.io/projected/84becd41-a7aa-4b36-a6a7-8516c2e64909-kube-api-access-xk7jf\") pod \"cinder-api-1\" (UID: \"84becd41-a7aa-4b36-a6a7-8516c2e64909\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.174022 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p55ww\" (UniqueName: \"kubernetes.io/projected/c5a3f592-54c6-44f1-9f09-f366502287a6-kube-api-access-p55ww\") pod \"cinder-api-0\" (UID: \"c5a3f592-54c6-44f1-9f09-f366502287a6\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.174042 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/869a701a-040e-44ea-98cc-53eb5f33c933-config-data\") pod \"cinder-api-2\" (UID: \"869a701a-040e-44ea-98cc-53eb5f33c933\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.174062 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84becd41-a7aa-4b36-a6a7-8516c2e64909-config-data\") pod \"cinder-api-1\" (UID: \"84becd41-a7aa-4b36-a6a7-8516c2e64909\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.174078 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5a3f592-54c6-44f1-9f09-f366502287a6-scripts\") pod \"cinder-api-0\" (UID: \"c5a3f592-54c6-44f1-9f09-f366502287a6\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.174100 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/869a701a-040e-44ea-98cc-53eb5f33c933-config-data-custom\") pod \"cinder-api-2\" (UID: \"869a701a-040e-44ea-98cc-53eb5f33c933\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.174134 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84becd41-a7aa-4b36-a6a7-8516c2e64909-etc-machine-id\") pod \"cinder-api-1\" (UID: \"84becd41-a7aa-4b36-a6a7-8516c2e64909\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.174161 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5a3f592-54c6-44f1-9f09-f366502287a6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c5a3f592-54c6-44f1-9f09-f366502287a6\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.174179 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84becd41-a7aa-4b36-a6a7-8516c2e64909-scripts\") pod \"cinder-api-1\" (UID: \"84becd41-a7aa-4b36-a6a7-8516c2e64909\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.174508 4714 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/869a701a-040e-44ea-98cc-53eb5f33c933-etc-machine-id\") pod \"cinder-api-2\" (UID: \"869a701a-040e-44ea-98cc-53eb5f33c933\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.175450 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84becd41-a7aa-4b36-a6a7-8516c2e64909-logs\") pod \"cinder-api-1\" (UID: \"84becd41-a7aa-4b36-a6a7-8516c2e64909\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.176511 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5a3f592-54c6-44f1-9f09-f366502287a6-logs\") pod \"cinder-api-0\" (UID: \"c5a3f592-54c6-44f1-9f09-f366502287a6\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.177312 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/869a701a-040e-44ea-98cc-53eb5f33c933-logs\") pod \"cinder-api-2\" (UID: \"869a701a-040e-44ea-98cc-53eb5f33c933\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.180303 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84becd41-a7aa-4b36-a6a7-8516c2e64909-etc-machine-id\") pod \"cinder-api-1\" (UID: \"84becd41-a7aa-4b36-a6a7-8516c2e64909\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.181385 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/869a701a-040e-44ea-98cc-53eb5f33c933-scripts\") pod \"cinder-api-2\" (UID: \"869a701a-040e-44ea-98cc-53eb5f33c933\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.181463 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5a3f592-54c6-44f1-9f09-f366502287a6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c5a3f592-54c6-44f1-9f09-f366502287a6\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.187362 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5a3f592-54c6-44f1-9f09-f366502287a6-config-data-custom\") pod \"cinder-api-0\" (UID: \"c5a3f592-54c6-44f1-9f09-f366502287a6\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.188406 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84becd41-a7aa-4b36-a6a7-8516c2e64909-scripts\") pod \"cinder-api-1\" (UID: \"84becd41-a7aa-4b36-a6a7-8516c2e64909\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.188374 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84becd41-a7aa-4b36-a6a7-8516c2e64909-config-data\") pod \"cinder-api-1\" (UID: \"84becd41-a7aa-4b36-a6a7-8516c2e64909\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.188562 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/869a701a-040e-44ea-98cc-53eb5f33c933-config-data-custom\") pod \"cinder-api-2\" (UID: \"869a701a-040e-44ea-98cc-53eb5f33c933\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.188680 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84becd41-a7aa-4b36-a6a7-8516c2e64909-config-data-custom\") pod \"cinder-api-1\" (UID: \"84becd41-a7aa-4b36-a6a7-8516c2e64909\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.189351 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a3f592-54c6-44f1-9f09-f366502287a6-config-data\") pod \"cinder-api-0\" (UID: \"c5a3f592-54c6-44f1-9f09-f366502287a6\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.190513 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/869a701a-040e-44ea-98cc-53eb5f33c933-config-data\") pod \"cinder-api-2\" (UID: \"869a701a-040e-44ea-98cc-53eb5f33c933\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.194068 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5a3f592-54c6-44f1-9f09-f366502287a6-scripts\") pod \"cinder-api-0\" (UID: \"c5a3f592-54c6-44f1-9f09-f366502287a6\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.194090 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zvnq\" (UniqueName: \"kubernetes.io/projected/869a701a-040e-44ea-98cc-53eb5f33c933-kube-api-access-4zvnq\") pod \"cinder-api-2\" (UID: \"869a701a-040e-44ea-98cc-53eb5f33c933\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.202049 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk7jf\" (UniqueName: \"kubernetes.io/projected/84becd41-a7aa-4b36-a6a7-8516c2e64909-kube-api-access-xk7jf\") pod \"cinder-api-1\" (UID: \"84becd41-a7aa-4b36-a6a7-8516c2e64909\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.202650 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p55ww\" (UniqueName: \"kubernetes.io/projected/c5a3f592-54c6-44f1-9f09-f366502287a6-kube-api-access-p55ww\") pod \"cinder-api-0\" (UID: \"c5a3f592-54c6-44f1-9f09-f366502287a6\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.276376 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.287259 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-2" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.296000 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-api-1" Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.733708 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 29 16:29:39 crc kubenswrapper[4714]: W0129 16:29:39.741675 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5a3f592_54c6_44f1_9f09_f366502287a6.slice/crio-fcf337a11251f351330506928dad191d61b9373e215ea14ec246e54e5e9a3034 WatchSource:0}: Error finding container fcf337a11251f351330506928dad191d61b9373e215ea14ec246e54e5e9a3034: Status 404 returned error can't find the container with id fcf337a11251f351330506928dad191d61b9373e215ea14ec246e54e5e9a3034 Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.796839 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-2"] Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.812336 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-1"] Jan 29 16:29:39 crc kubenswrapper[4714]: W0129 16:29:39.820782 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84becd41_a7aa_4b36_a6a7_8516c2e64909.slice/crio-7f9cbec8d1d3b10670a11a02aa0992c33a6443ac29d99038a22ff077c3971712 WatchSource:0}: Error finding container 7f9cbec8d1d3b10670a11a02aa0992c33a6443ac29d99038a22ff077c3971712: Status 404 returned error can't find the container with id 7f9cbec8d1d3b10670a11a02aa0992c33a6443ac29d99038a22ff077c3971712 Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.877134 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-2" event={"ID":"869a701a-040e-44ea-98cc-53eb5f33c933","Type":"ContainerStarted","Data":"2504822ce0a99b2d0cc7db311f85a30c6e6f161cbc200d841e855f4f3eb1ff5b"} Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.879173 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-1" event={"ID":"84becd41-a7aa-4b36-a6a7-8516c2e64909","Type":"ContainerStarted","Data":"7f9cbec8d1d3b10670a11a02aa0992c33a6443ac29d99038a22ff077c3971712"} Jan 29 16:29:39 crc kubenswrapper[4714]: I0129 16:29:39.881843 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"c5a3f592-54c6-44f1-9f09-f366502287a6","Type":"ContainerStarted","Data":"fcf337a11251f351330506928dad191d61b9373e215ea14ec246e54e5e9a3034"} Jan 29 16:29:40 crc kubenswrapper[4714]: I0129 16:29:40.891643 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-2" event={"ID":"869a701a-040e-44ea-98cc-53eb5f33c933","Type":"ContainerStarted","Data":"fe14e97e807f7bfc78569e0b132f84ed0512e6e44920a45bdbbf02eab4f1fb0d"} Jan 29 16:29:40 crc kubenswrapper[4714]: I0129 16:29:40.894290 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-1" event={"ID":"84becd41-a7aa-4b36-a6a7-8516c2e64909","Type":"ContainerStarted","Data":"df6868e9d7ac6b0ebfef8c6a4c95fca607fa88217512148f0e9154c25064c11e"} Jan 29 16:29:40 crc kubenswrapper[4714]: I0129 16:29:40.898156 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"c5a3f592-54c6-44f1-9f09-f366502287a6","Type":"ContainerStarted","Data":"32d98c19e9cd977d60b8a6256b941e6d21b55abe95ffb10609d846a79c267c86"} Jan 29 16:29:41 crc kubenswrapper[4714]: I0129 16:29:41.184891 4714 scope.go:117] 
"RemoveContainer" containerID="bbd0c612c943b6ec94ed1610e6f355545ecaf388e165bbb67cb2c301b434e969" Jan 29 16:29:41 crc kubenswrapper[4714]: I0129 16:29:41.185488 4714 scope.go:117] "RemoveContainer" containerID="06b2f42073c400aedcdcd2bb5ec2c469776d6fe3bfbe8ce136fb5ad93196a673" Jan 29 16:29:41 crc kubenswrapper[4714]: E0129 16:29:41.186055 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 40s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" Jan 29 16:29:41 crc kubenswrapper[4714]: I0129 16:29:41.914189 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-2" event={"ID":"869a701a-040e-44ea-98cc-53eb5f33c933","Type":"ContainerStarted","Data":"ee0d2c74772fa7181a1d5113e251172e052dd7202e52d6c038b2533ca027dac1"} Jan 29 16:29:41 crc kubenswrapper[4714]: I0129 16:29:41.914360 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/cinder-api-2" Jan 29 16:29:41 crc kubenswrapper[4714]: I0129 16:29:41.918021 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-1" event={"ID":"84becd41-a7aa-4b36-a6a7-8516c2e64909","Type":"ContainerStarted","Data":"59d26536c5d18af3cf0b3379093a52042043c35adcb85fba86b4fe5fa5b56cb8"} Jan 29 16:29:41 crc kubenswrapper[4714]: I0129 16:29:41.918232 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/cinder-api-1" Jan 29 16:29:41 crc kubenswrapper[4714]: I0129 16:29:41.922533 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"c5a3f592-54c6-44f1-9f09-f366502287a6","Type":"ContainerStarted","Data":"922b00d2d16cf09fabe14b15b7a7648c66244d5a615d55c77b2cc333c2095cf3"} Jan 29 16:29:41 crc kubenswrapper[4714]: I0129 16:29:41.923148 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:29:41 crc kubenswrapper[4714]: I0129 16:29:41.943612 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-api-2" podStartSLOduration=3.943591689 podStartE2EDuration="3.943591689s" podCreationTimestamp="2026-01-29 16:29:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:29:41.938226617 +0000 UTC m=+1188.458727777" watchObservedRunningTime="2026-01-29 16:29:41.943591689 +0000 UTC m=+1188.464092819" Jan 29 16:29:41 crc kubenswrapper[4714]: I0129 16:29:41.974980 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-api-0" podStartSLOduration=3.974908069 podStartE2EDuration="3.974908069s" podCreationTimestamp="2026-01-29 16:29:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:29:41.967041803 +0000 UTC m=+1188.487542943" watchObservedRunningTime="2026-01-29 16:29:41.974908069 +0000 UTC m=+1188.495409239" Jan 29 16:29:42 crc kubenswrapper[4714]: I0129 16:29:41.999615 4714 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-api-1" podStartSLOduration=3.999586308 podStartE2EDuration="3.999586308s" podCreationTimestamp="2026-01-29 16:29:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:29:41.991071709 +0000 UTC m=+1188.511572889" watchObservedRunningTime="2026-01-29 16:29:41.999586308 +0000 UTC m=+1188.520087438" Jan 29 16:29:48 crc kubenswrapper[4714]: E0129 16:29:48.196785 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:29:51 crc kubenswrapper[4714]: I0129 16:29:51.080495 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:29:51 crc kubenswrapper[4714]: I0129 16:29:51.092381 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/cinder-api-2" Jan 29 16:29:51 crc kubenswrapper[4714]: I0129 16:29:51.137606 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/cinder-api-1" Jan 29 16:29:52 crc kubenswrapper[4714]: I0129 16:29:52.185063 4714 scope.go:117] "RemoveContainer" containerID="bbd0c612c943b6ec94ed1610e6f355545ecaf388e165bbb67cb2c301b434e969" Jan 29 16:29:52 crc kubenswrapper[4714]: I0129 16:29:52.185119 4714 scope.go:117] "RemoveContainer" containerID="06b2f42073c400aedcdcd2bb5ec2c469776d6fe3bfbe8ce136fb5ad93196a673" Jan 29 16:29:52 crc kubenswrapper[4714]: E0129 16:29:52.185521 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 40s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(2e154d80-4b79-4f74-809e-c1c274ed4063)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" Jan 29 16:29:52 crc kubenswrapper[4714]: I0129 16:29:52.409571 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-2"] Jan 29 16:29:52 crc kubenswrapper[4714]: I0129 16:29:52.410445 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-2" podUID="869a701a-040e-44ea-98cc-53eb5f33c933" containerName="cinder-api-log" containerID="cri-o://fe14e97e807f7bfc78569e0b132f84ed0512e6e44920a45bdbbf02eab4f1fb0d" gracePeriod=30 Jan 29 16:29:52 crc kubenswrapper[4714]: I0129 16:29:52.410582 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-2" podUID="869a701a-040e-44ea-98cc-53eb5f33c933" containerName="cinder-api" containerID="cri-o://ee0d2c74772fa7181a1d5113e251172e052dd7202e52d6c038b2533ca027dac1" gracePeriod=30 Jan 29 16:29:52 crc kubenswrapper[4714]: I0129 16:29:52.418586 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-1"] Jan 29 16:29:52 crc kubenswrapper[4714]: I0129 16:29:52.418996 4714 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="cinder-kuttl-tests/cinder-api-1" podUID="84becd41-a7aa-4b36-a6a7-8516c2e64909" containerName="cinder-api-log" containerID="cri-o://df6868e9d7ac6b0ebfef8c6a4c95fca607fa88217512148f0e9154c25064c11e" gracePeriod=30 Jan 29 16:29:52 crc kubenswrapper[4714]: I0129 16:29:52.419118 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-1" podUID="84becd41-a7aa-4b36-a6a7-8516c2e64909" containerName="cinder-api" containerID="cri-o://59d26536c5d18af3cf0b3379093a52042043c35adcb85fba86b4fe5fa5b56cb8" gracePeriod=30 Jan 29 16:29:52 crc kubenswrapper[4714]: I0129 16:29:52.425379 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="cinder-kuttl-tests/cinder-api-2" podUID="869a701a-040e-44ea-98cc-53eb5f33c933" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.100:8776/healthcheck\": EOF" Jan 29 16:29:52 crc kubenswrapper[4714]: I0129 16:29:52.430754 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="cinder-kuttl-tests/cinder-api-1" podUID="84becd41-a7aa-4b36-a6a7-8516c2e64909" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.99:8776/healthcheck\": EOF" Jan 29 16:29:53 crc kubenswrapper[4714]: I0129 16:29:53.039512 4714 generic.go:334] "Generic (PLEG): container finished" podID="84becd41-a7aa-4b36-a6a7-8516c2e64909" containerID="df6868e9d7ac6b0ebfef8c6a4c95fca607fa88217512148f0e9154c25064c11e" exitCode=143 Jan 29 16:29:53 crc kubenswrapper[4714]: I0129 16:29:53.039588 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-1" event={"ID":"84becd41-a7aa-4b36-a6a7-8516c2e64909","Type":"ContainerDied","Data":"df6868e9d7ac6b0ebfef8c6a4c95fca607fa88217512148f0e9154c25064c11e"} Jan 29 16:29:53 crc kubenswrapper[4714]: I0129 16:29:53.043531 4714 generic.go:334] "Generic (PLEG): container finished" podID="869a701a-040e-44ea-98cc-53eb5f33c933" containerID="fe14e97e807f7bfc78569e0b132f84ed0512e6e44920a45bdbbf02eab4f1fb0d" exitCode=143 Jan 29 16:29:53 crc kubenswrapper[4714]: I0129 16:29:53.043570 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-2" event={"ID":"869a701a-040e-44ea-98cc-53eb5f33c933","Type":"ContainerDied","Data":"fe14e97e807f7bfc78569e0b132f84ed0512e6e44920a45bdbbf02eab4f1fb0d"} Jan 29 16:29:56 crc kubenswrapper[4714]: I0129 16:29:56.856313 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/cinder-api-2" podUID="869a701a-040e-44ea-98cc-53eb5f33c933" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.100:8776/healthcheck\": read tcp 10.217.0.2:57768->10.217.0.100:8776: read: connection reset by peer" Jan 29 16:29:56 crc kubenswrapper[4714]: I0129 16:29:56.863983 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/cinder-api-1" podUID="84becd41-a7aa-4b36-a6a7-8516c2e64909" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.99:8776/healthcheck\": read tcp 10.217.0.2:57046->10.217.0.99:8776: read: connection reset by peer" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.080021 4714 generic.go:334] "Generic (PLEG): container finished" podID="869a701a-040e-44ea-98cc-53eb5f33c933" containerID="ee0d2c74772fa7181a1d5113e251172e052dd7202e52d6c038b2533ca027dac1" exitCode=0 Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.080203 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-2" 
event={"ID":"869a701a-040e-44ea-98cc-53eb5f33c933","Type":"ContainerDied","Data":"ee0d2c74772fa7181a1d5113e251172e052dd7202e52d6c038b2533ca027dac1"} Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.082734 4714 generic.go:334] "Generic (PLEG): container finished" podID="84becd41-a7aa-4b36-a6a7-8516c2e64909" containerID="59d26536c5d18af3cf0b3379093a52042043c35adcb85fba86b4fe5fa5b56cb8" exitCode=0 Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.082764 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-1" event={"ID":"84becd41-a7aa-4b36-a6a7-8516c2e64909","Type":"ContainerDied","Data":"59d26536c5d18af3cf0b3379093a52042043c35adcb85fba86b4fe5fa5b56cb8"} Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.275993 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-2" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.281644 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-1" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.398482 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84becd41-a7aa-4b36-a6a7-8516c2e64909-config-data\") pod \"84becd41-a7aa-4b36-a6a7-8516c2e64909\" (UID: \"84becd41-a7aa-4b36-a6a7-8516c2e64909\") " Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.398536 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/869a701a-040e-44ea-98cc-53eb5f33c933-config-data\") pod \"869a701a-040e-44ea-98cc-53eb5f33c933\" (UID: \"869a701a-040e-44ea-98cc-53eb5f33c933\") " Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.398567 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk7jf\" (UniqueName: \"kubernetes.io/projected/84becd41-a7aa-4b36-a6a7-8516c2e64909-kube-api-access-xk7jf\") pod \"84becd41-a7aa-4b36-a6a7-8516c2e64909\" (UID: \"84becd41-a7aa-4b36-a6a7-8516c2e64909\") " Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.398614 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/869a701a-040e-44ea-98cc-53eb5f33c933-etc-machine-id\") pod \"869a701a-040e-44ea-98cc-53eb5f33c933\" (UID: \"869a701a-040e-44ea-98cc-53eb5f33c933\") " Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.398650 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zvnq\" (UniqueName: \"kubernetes.io/projected/869a701a-040e-44ea-98cc-53eb5f33c933-kube-api-access-4zvnq\") pod \"869a701a-040e-44ea-98cc-53eb5f33c933\" (UID: \"869a701a-040e-44ea-98cc-53eb5f33c933\") " Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.398673 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84becd41-a7aa-4b36-a6a7-8516c2e64909-logs\") pod \"84becd41-a7aa-4b36-a6a7-8516c2e64909\" (UID: \"84becd41-a7aa-4b36-a6a7-8516c2e64909\") " Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.398689 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/869a701a-040e-44ea-98cc-53eb5f33c933-logs\") pod \"869a701a-040e-44ea-98cc-53eb5f33c933\" (UID: \"869a701a-040e-44ea-98cc-53eb5f33c933\") " Jan 29 16:29:57 crc 
kubenswrapper[4714]: I0129 16:29:57.398748 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/869a701a-040e-44ea-98cc-53eb5f33c933-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "869a701a-040e-44ea-98cc-53eb5f33c933" (UID: "869a701a-040e-44ea-98cc-53eb5f33c933"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.398793 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84becd41-a7aa-4b36-a6a7-8516c2e64909-scripts\") pod \"84becd41-a7aa-4b36-a6a7-8516c2e64909\" (UID: \"84becd41-a7aa-4b36-a6a7-8516c2e64909\") " Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.398870 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/869a701a-040e-44ea-98cc-53eb5f33c933-scripts\") pod \"869a701a-040e-44ea-98cc-53eb5f33c933\" (UID: \"869a701a-040e-44ea-98cc-53eb5f33c933\") " Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.399303 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84becd41-a7aa-4b36-a6a7-8516c2e64909-etc-machine-id\") pod \"84becd41-a7aa-4b36-a6a7-8516c2e64909\" (UID: \"84becd41-a7aa-4b36-a6a7-8516c2e64909\") " Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.399336 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/869a701a-040e-44ea-98cc-53eb5f33c933-logs" (OuterVolumeSpecName: "logs") pod "869a701a-040e-44ea-98cc-53eb5f33c933" (UID: "869a701a-040e-44ea-98cc-53eb5f33c933"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.399346 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/869a701a-040e-44ea-98cc-53eb5f33c933-config-data-custom\") pod \"869a701a-040e-44ea-98cc-53eb5f33c933\" (UID: \"869a701a-040e-44ea-98cc-53eb5f33c933\") " Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.399332 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84becd41-a7aa-4b36-a6a7-8516c2e64909-logs" (OuterVolumeSpecName: "logs") pod "84becd41-a7aa-4b36-a6a7-8516c2e64909" (UID: "84becd41-a7aa-4b36-a6a7-8516c2e64909"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.399367 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84becd41-a7aa-4b36-a6a7-8516c2e64909-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "84becd41-a7aa-4b36-a6a7-8516c2e64909" (UID: "84becd41-a7aa-4b36-a6a7-8516c2e64909"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.399390 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84becd41-a7aa-4b36-a6a7-8516c2e64909-config-data-custom\") pod \"84becd41-a7aa-4b36-a6a7-8516c2e64909\" (UID: \"84becd41-a7aa-4b36-a6a7-8516c2e64909\") " Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.399692 4714 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84becd41-a7aa-4b36-a6a7-8516c2e64909-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.399707 4714 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/869a701a-040e-44ea-98cc-53eb5f33c933-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.399717 4714 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84becd41-a7aa-4b36-a6a7-8516c2e64909-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.399727 4714 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/869a701a-040e-44ea-98cc-53eb5f33c933-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.408300 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/869a701a-040e-44ea-98cc-53eb5f33c933-scripts" (OuterVolumeSpecName: "scripts") pod "869a701a-040e-44ea-98cc-53eb5f33c933" (UID: "869a701a-040e-44ea-98cc-53eb5f33c933"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.408374 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84becd41-a7aa-4b36-a6a7-8516c2e64909-scripts" (OuterVolumeSpecName: "scripts") pod "84becd41-a7aa-4b36-a6a7-8516c2e64909" (UID: "84becd41-a7aa-4b36-a6a7-8516c2e64909"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.409876 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/869a701a-040e-44ea-98cc-53eb5f33c933-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "869a701a-040e-44ea-98cc-53eb5f33c933" (UID: "869a701a-040e-44ea-98cc-53eb5f33c933"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.414239 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869a701a-040e-44ea-98cc-53eb5f33c933-kube-api-access-4zvnq" (OuterVolumeSpecName: "kube-api-access-4zvnq") pod "869a701a-040e-44ea-98cc-53eb5f33c933" (UID: "869a701a-040e-44ea-98cc-53eb5f33c933"). InnerVolumeSpecName "kube-api-access-4zvnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.415496 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84becd41-a7aa-4b36-a6a7-8516c2e64909-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "84becd41-a7aa-4b36-a6a7-8516c2e64909" (UID: "84becd41-a7aa-4b36-a6a7-8516c2e64909"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.418032 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84becd41-a7aa-4b36-a6a7-8516c2e64909-kube-api-access-xk7jf" (OuterVolumeSpecName: "kube-api-access-xk7jf") pod "84becd41-a7aa-4b36-a6a7-8516c2e64909" (UID: "84becd41-a7aa-4b36-a6a7-8516c2e64909"). InnerVolumeSpecName "kube-api-access-xk7jf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.443932 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/869a701a-040e-44ea-98cc-53eb5f33c933-config-data" (OuterVolumeSpecName: "config-data") pod "869a701a-040e-44ea-98cc-53eb5f33c933" (UID: "869a701a-040e-44ea-98cc-53eb5f33c933"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.463234 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84becd41-a7aa-4b36-a6a7-8516c2e64909-config-data" (OuterVolumeSpecName: "config-data") pod "84becd41-a7aa-4b36-a6a7-8516c2e64909" (UID: "84becd41-a7aa-4b36-a6a7-8516c2e64909"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.500925 4714 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/869a701a-040e-44ea-98cc-53eb5f33c933-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.501195 4714 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/869a701a-040e-44ea-98cc-53eb5f33c933-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.501279 4714 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84becd41-a7aa-4b36-a6a7-8516c2e64909-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.501359 4714 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84becd41-a7aa-4b36-a6a7-8516c2e64909-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.501435 4714 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/869a701a-040e-44ea-98cc-53eb5f33c933-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.501503 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk7jf\" (UniqueName: \"kubernetes.io/projected/84becd41-a7aa-4b36-a6a7-8516c2e64909-kube-api-access-xk7jf\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.501582 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zvnq\" (UniqueName: \"kubernetes.io/projected/869a701a-040e-44ea-98cc-53eb5f33c933-kube-api-access-4zvnq\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.501653 4714 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84becd41-a7aa-4b36-a6a7-8516c2e64909-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 
16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.844172 4714 patch_prober.go:28] interesting pod/machine-config-daemon-ppngk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:29:57 crc kubenswrapper[4714]: I0129 16:29:57.844663 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.090768 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-2" event={"ID":"869a701a-040e-44ea-98cc-53eb5f33c933","Type":"ContainerDied","Data":"2504822ce0a99b2d0cc7db311f85a30c6e6f161cbc200d841e855f4f3eb1ff5b"} Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.090814 4714 scope.go:117] "RemoveContainer" containerID="ee0d2c74772fa7181a1d5113e251172e052dd7202e52d6c038b2533ca027dac1" Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.090877 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-2" Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.094097 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-1" event={"ID":"84becd41-a7aa-4b36-a6a7-8516c2e64909","Type":"ContainerDied","Data":"7f9cbec8d1d3b10670a11a02aa0992c33a6443ac29d99038a22ff077c3971712"} Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.094151 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-api-1" Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.109457 4714 scope.go:117] "RemoveContainer" containerID="fe14e97e807f7bfc78569e0b132f84ed0512e6e44920a45bdbbf02eab4f1fb0d" Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.133534 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-1"] Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.133901 4714 scope.go:117] "RemoveContainer" containerID="59d26536c5d18af3cf0b3379093a52042043c35adcb85fba86b4fe5fa5b56cb8" Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.149190 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-api-1"] Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.156621 4714 scope.go:117] "RemoveContainer" containerID="df6868e9d7ac6b0ebfef8c6a4c95fca607fa88217512148f0e9154c25064c11e" Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.156930 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-2"] Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.164593 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-api-2"] Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.191870 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84becd41-a7aa-4b36-a6a7-8516c2e64909" path="/var/lib/kubelet/pods/84becd41-a7aa-4b36-a6a7-8516c2e64909/volumes" Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.192613 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869a701a-040e-44ea-98cc-53eb5f33c933" path="/var/lib/kubelet/pods/869a701a-040e-44ea-98cc-53eb5f33c933/volumes" Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.740091 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-9pvrg"] Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.752515 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-9pvrg"] Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.759682 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.759916 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-0" podUID="c0390b29-ac12-4c76-a954-8c7236d81661" containerName="cinder-scheduler" containerID="cri-o://8b1dd6490cae9482eb69b08b1702b33a571bfe8525bbdf572a2f8f5feb7f4e29" gracePeriod=30 Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.760031 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-0" podUID="c0390b29-ac12-4c76-a954-8c7236d81661" containerName="probe" containerID="cri-o://a3a659f2d53d2c5e83b9c4601491f3c732c7a7d0fe867de849d8aab7c8e4a69c" gracePeriod=30 Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.773197 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.773492 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-0" podUID="e01c83c7-65ba-4f1b-9d17-ba5a824216bb" containerName="cinder-backup" containerID="cri-o://f457144c29e9469b286400011d4bf2e3e4f7a3f73d20a9a7750c293dc9d6911b" gracePeriod=30 Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.773637 4714 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-0" podUID="e01c83c7-65ba-4f1b-9d17-ba5a824216bb" containerName="probe" containerID="cri-o://e1bfab0c04a18d8e21e126e8983546eaa8cd1254da1312d48182dae10e5944bc" gracePeriod=30 Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.804013 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.848449 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.848716 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-0" podUID="c5a3f592-54c6-44f1-9f09-f366502287a6" containerName="cinder-api-log" containerID="cri-o://32d98c19e9cd977d60b8a6256b941e6d21b55abe95ffb10609d846a79c267c86" gracePeriod=30 Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.849149 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-0" podUID="c5a3f592-54c6-44f1-9f09-f366502287a6" containerName="cinder-api" containerID="cri-o://922b00d2d16cf09fabe14b15b7a7648c66244d5a615d55c77b2cc333c2095cf3" gracePeriod=30 Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.855243 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder9de3-account-delete-6gpgf"] Jan 29 16:29:58 crc kubenswrapper[4714]: E0129 16:29:58.855868 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869a701a-040e-44ea-98cc-53eb5f33c933" containerName="cinder-api" Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.855888 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="869a701a-040e-44ea-98cc-53eb5f33c933" containerName="cinder-api" Jan 29 16:29:58 crc kubenswrapper[4714]: E0129 16:29:58.855909 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84becd41-a7aa-4b36-a6a7-8516c2e64909" containerName="cinder-api" Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.855917 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="84becd41-a7aa-4b36-a6a7-8516c2e64909" containerName="cinder-api" Jan 29 16:29:58 crc kubenswrapper[4714]: E0129 16:29:58.855931 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84becd41-a7aa-4b36-a6a7-8516c2e64909" containerName="cinder-api-log" Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.856015 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="84becd41-a7aa-4b36-a6a7-8516c2e64909" containerName="cinder-api-log" Jan 29 16:29:58 crc kubenswrapper[4714]: E0129 16:29:58.856032 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869a701a-040e-44ea-98cc-53eb5f33c933" containerName="cinder-api-log" Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.856039 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="869a701a-040e-44ea-98cc-53eb5f33c933" containerName="cinder-api-log" Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.856199 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="84becd41-a7aa-4b36-a6a7-8516c2e64909" containerName="cinder-api-log" Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.856221 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="84becd41-a7aa-4b36-a6a7-8516c2e64909" containerName="cinder-api" Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.856232 4714 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="869a701a-040e-44ea-98cc-53eb5f33c933" containerName="cinder-api" Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.856245 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="869a701a-040e-44ea-98cc-53eb5f33c933" containerName="cinder-api-log" Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.856769 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder9de3-account-delete-6gpgf" Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.871119 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder9de3-account-delete-6gpgf"] Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.927130 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d989387-6924-48df-a1d4-1c63911dd476-operator-scripts\") pod \"cinder9de3-account-delete-6gpgf\" (UID: \"8d989387-6924-48df-a1d4-1c63911dd476\") " pod="cinder-kuttl-tests/cinder9de3-account-delete-6gpgf" Jan 29 16:29:58 crc kubenswrapper[4714]: I0129 16:29:58.927181 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftsmf\" (UniqueName: \"kubernetes.io/projected/8d989387-6924-48df-a1d4-1c63911dd476-kube-api-access-ftsmf\") pod \"cinder9de3-account-delete-6gpgf\" (UID: \"8d989387-6924-48df-a1d4-1c63911dd476\") " pod="cinder-kuttl-tests/cinder9de3-account-delete-6gpgf" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.029031 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d989387-6924-48df-a1d4-1c63911dd476-operator-scripts\") pod \"cinder9de3-account-delete-6gpgf\" (UID: \"8d989387-6924-48df-a1d4-1c63911dd476\") " pod="cinder-kuttl-tests/cinder9de3-account-delete-6gpgf" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.029095 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftsmf\" (UniqueName: \"kubernetes.io/projected/8d989387-6924-48df-a1d4-1c63911dd476-kube-api-access-ftsmf\") pod \"cinder9de3-account-delete-6gpgf\" (UID: \"8d989387-6924-48df-a1d4-1c63911dd476\") " pod="cinder-kuttl-tests/cinder9de3-account-delete-6gpgf" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.029890 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d989387-6924-48df-a1d4-1c63911dd476-operator-scripts\") pod \"cinder9de3-account-delete-6gpgf\" (UID: \"8d989387-6924-48df-a1d4-1c63911dd476\") " pod="cinder-kuttl-tests/cinder9de3-account-delete-6gpgf" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.055686 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftsmf\" (UniqueName: \"kubernetes.io/projected/8d989387-6924-48df-a1d4-1c63911dd476-kube-api-access-ftsmf\") pod \"cinder9de3-account-delete-6gpgf\" (UID: \"8d989387-6924-48df-a1d4-1c63911dd476\") " pod="cinder-kuttl-tests/cinder9de3-account-delete-6gpgf" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.102183 4714 generic.go:334] "Generic (PLEG): container finished" podID="c5a3f592-54c6-44f1-9f09-f366502287a6" containerID="32d98c19e9cd977d60b8a6256b941e6d21b55abe95ffb10609d846a79c267c86" exitCode=143 Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.102231 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"c5a3f592-54c6-44f1-9f09-f366502287a6","Type":"ContainerDied","Data":"32d98c19e9cd977d60b8a6256b941e6d21b55abe95ffb10609d846a79c267c86"} Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.104291 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"2e154d80-4b79-4f74-809e-c1c274ed4063","Type":"ContainerDied","Data":"aa9a0a1ce04cf5e776943ebe4b4ffc887cc7f0d7111f44e2e55f143d9edbcb9b"} Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.104322 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa9a0a1ce04cf5e776943ebe4b4ffc887cc7f0d7111f44e2e55f143d9edbcb9b" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.106772 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.177766 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder9de3-account-delete-6gpgf" Jan 29 16:29:59 crc kubenswrapper[4714]: E0129 16:29:59.188328 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.231240 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-var-locks-cinder\") pod \"2e154d80-4b79-4f74-809e-c1c274ed4063\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.231303 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-etc-machine-id\") pod \"2e154d80-4b79-4f74-809e-c1c274ed4063\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.231347 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e154d80-4b79-4f74-809e-c1c274ed4063-config-data\") pod \"2e154d80-4b79-4f74-809e-c1c274ed4063\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.231376 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-dev\") pod \"2e154d80-4b79-4f74-809e-c1c274ed4063\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.231389 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "2e154d80-4b79-4f74-809e-c1c274ed4063" (UID: "2e154d80-4b79-4f74-809e-c1c274ed4063"). InnerVolumeSpecName "var-locks-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.231420 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-etc-nvme\") pod \"2e154d80-4b79-4f74-809e-c1c274ed4063\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.231444 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2e154d80-4b79-4f74-809e-c1c274ed4063" (UID: "2e154d80-4b79-4f74-809e-c1c274ed4063"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.231463 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-sys\") pod \"2e154d80-4b79-4f74-809e-c1c274ed4063\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.231465 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-dev" (OuterVolumeSpecName: "dev") pod "2e154d80-4b79-4f74-809e-c1c274ed4063" (UID: "2e154d80-4b79-4f74-809e-c1c274ed4063"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.231486 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-sys" (OuterVolumeSpecName: "sys") pod "2e154d80-4b79-4f74-809e-c1c274ed4063" (UID: "2e154d80-4b79-4f74-809e-c1c274ed4063"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.231488 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "2e154d80-4b79-4f74-809e-c1c274ed4063" (UID: "2e154d80-4b79-4f74-809e-c1c274ed4063"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.231501 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-lib-modules\") pod \"2e154d80-4b79-4f74-809e-c1c274ed4063\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.231564 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2e154d80-4b79-4f74-809e-c1c274ed4063" (UID: "2e154d80-4b79-4f74-809e-c1c274ed4063"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.231593 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-var-locks-brick\") pod \"2e154d80-4b79-4f74-809e-c1c274ed4063\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.231617 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e154d80-4b79-4f74-809e-c1c274ed4063-scripts\") pod \"2e154d80-4b79-4f74-809e-c1c274ed4063\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.231620 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "2e154d80-4b79-4f74-809e-c1c274ed4063" (UID: "2e154d80-4b79-4f74-809e-c1c274ed4063"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.231661 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbcbm\" (UniqueName: \"kubernetes.io/projected/2e154d80-4b79-4f74-809e-c1c274ed4063-kube-api-access-dbcbm\") pod \"2e154d80-4b79-4f74-809e-c1c274ed4063\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.231709 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-var-lib-cinder\") pod \"2e154d80-4b79-4f74-809e-c1c274ed4063\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.231730 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e154d80-4b79-4f74-809e-c1c274ed4063-config-data-custom\") pod \"2e154d80-4b79-4f74-809e-c1c274ed4063\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.231794 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-etc-iscsi\") pod \"2e154d80-4b79-4f74-809e-c1c274ed4063\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.231816 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-run\") pod \"2e154d80-4b79-4f74-809e-c1c274ed4063\" (UID: \"2e154d80-4b79-4f74-809e-c1c274ed4063\") " Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.231833 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "2e154d80-4b79-4f74-809e-c1c274ed4063" (UID: "2e154d80-4b79-4f74-809e-c1c274ed4063"). InnerVolumeSpecName "var-lib-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.232241 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "2e154d80-4b79-4f74-809e-c1c274ed4063" (UID: "2e154d80-4b79-4f74-809e-c1c274ed4063"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.232272 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-run" (OuterVolumeSpecName: "run") pod "2e154d80-4b79-4f74-809e-c1c274ed4063" (UID: "2e154d80-4b79-4f74-809e-c1c274ed4063"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.232395 4714 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.232410 4714 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.232420 4714 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-run\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.232432 4714 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.232446 4714 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.232458 4714 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-dev\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.232469 4714 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.232480 4714 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-sys\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.232490 4714 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.232500 4714 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2e154d80-4b79-4f74-809e-c1c274ed4063-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.238431 4714 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e154d80-4b79-4f74-809e-c1c274ed4063-scripts" (OuterVolumeSpecName: "scripts") pod "2e154d80-4b79-4f74-809e-c1c274ed4063" (UID: "2e154d80-4b79-4f74-809e-c1c274ed4063"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.239243 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e154d80-4b79-4f74-809e-c1c274ed4063-kube-api-access-dbcbm" (OuterVolumeSpecName: "kube-api-access-dbcbm") pod "2e154d80-4b79-4f74-809e-c1c274ed4063" (UID: "2e154d80-4b79-4f74-809e-c1c274ed4063"). InnerVolumeSpecName "kube-api-access-dbcbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.240621 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e154d80-4b79-4f74-809e-c1c274ed4063-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2e154d80-4b79-4f74-809e-c1c274ed4063" (UID: "2e154d80-4b79-4f74-809e-c1c274ed4063"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.309496 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e154d80-4b79-4f74-809e-c1c274ed4063-config-data" (OuterVolumeSpecName: "config-data") pod "2e154d80-4b79-4f74-809e-c1c274ed4063" (UID: "2e154d80-4b79-4f74-809e-c1c274ed4063"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.336574 4714 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e154d80-4b79-4f74-809e-c1c274ed4063-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.336600 4714 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e154d80-4b79-4f74-809e-c1c274ed4063-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.336612 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbcbm\" (UniqueName: \"kubernetes.io/projected/2e154d80-4b79-4f74-809e-c1c274ed4063-kube-api-access-dbcbm\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.336653 4714 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e154d80-4b79-4f74-809e-c1c274ed4063-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:59 crc kubenswrapper[4714]: I0129 16:29:59.592548 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder9de3-account-delete-6gpgf"] Jan 29 16:29:59 crc kubenswrapper[4714]: W0129 16:29:59.596698 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d989387_6924_48df_a1d4_1c63911dd476.slice/crio-16c8423dc7a88554a04643282ed7d3e92befa98e3c24c9a8e1f726e72cddc134 WatchSource:0}: Error finding container 16c8423dc7a88554a04643282ed7d3e92befa98e3c24c9a8e1f726e72cddc134: Status 404 returned error can't find the container with id 16c8423dc7a88554a04643282ed7d3e92befa98e3c24c9a8e1f726e72cddc134 Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.114313 4714 generic.go:334] 
"Generic (PLEG): container finished" podID="c0390b29-ac12-4c76-a954-8c7236d81661" containerID="a3a659f2d53d2c5e83b9c4601491f3c732c7a7d0fe867de849d8aab7c8e4a69c" exitCode=0 Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.114379 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"c0390b29-ac12-4c76-a954-8c7236d81661","Type":"ContainerDied","Data":"a3a659f2d53d2c5e83b9c4601491f3c732c7a7d0fe867de849d8aab7c8e4a69c"} Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.116376 4714 generic.go:334] "Generic (PLEG): container finished" podID="e01c83c7-65ba-4f1b-9d17-ba5a824216bb" containerID="e1bfab0c04a18d8e21e126e8983546eaa8cd1254da1312d48182dae10e5944bc" exitCode=0 Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.116425 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"e01c83c7-65ba-4f1b-9d17-ba5a824216bb","Type":"ContainerDied","Data":"e1bfab0c04a18d8e21e126e8983546eaa8cd1254da1312d48182dae10e5944bc"} Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.117968 4714 generic.go:334] "Generic (PLEG): container finished" podID="8d989387-6924-48df-a1d4-1c63911dd476" containerID="b17d360d1529ce324f10ea0628f0cee292edf28a50e5d75cc7f2606e49a8da6e" exitCode=0 Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.118046 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.118140 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder9de3-account-delete-6gpgf" event={"ID":"8d989387-6924-48df-a1d4-1c63911dd476","Type":"ContainerDied","Data":"b17d360d1529ce324f10ea0628f0cee292edf28a50e5d75cc7f2606e49a8da6e"} Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.118184 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder9de3-account-delete-6gpgf" event={"ID":"8d989387-6924-48df-a1d4-1c63911dd476","Type":"ContainerStarted","Data":"16c8423dc7a88554a04643282ed7d3e92befa98e3c24c9a8e1f726e72cddc134"} Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.152362 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-b47jt"] Jan 29 16:30:00 crc kubenswrapper[4714]: E0129 16:30:00.152666 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerName="probe" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.152680 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerName="probe" Jan 29 16:30:00 crc kubenswrapper[4714]: E0129 16:30:00.152688 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerName="cinder-volume" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.152694 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerName="cinder-volume" Jan 29 16:30:00 crc kubenswrapper[4714]: E0129 16:30:00.152703 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerName="cinder-volume" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.152708 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerName="cinder-volume" Jan 29 16:30:00 crc kubenswrapper[4714]: E0129 
16:30:00.152730 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerName="cinder-volume" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.152738 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerName="cinder-volume" Jan 29 16:30:00 crc kubenswrapper[4714]: E0129 16:30:00.152748 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerName="probe" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.152754 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerName="probe" Jan 29 16:30:00 crc kubenswrapper[4714]: E0129 16:30:00.152763 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerName="cinder-volume" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.152768 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerName="cinder-volume" Jan 29 16:30:00 crc kubenswrapper[4714]: E0129 16:30:00.152779 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerName="probe" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.152785 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerName="probe" Jan 29 16:30:00 crc kubenswrapper[4714]: E0129 16:30:00.152796 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerName="probe" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.152802 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerName="probe" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.152927 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerName="probe" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.152951 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerName="probe" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.152962 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerName="cinder-volume" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.152969 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerName="cinder-volume" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.152977 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerName="probe" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.152986 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerName="cinder-volume" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.153423 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-b47jt" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.160330 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.160353 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.167145 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-b47jt"] Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.196771 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f09789c2-52ed-4321-95f8-02c3b3f271e3" path="/var/lib/kubelet/pods/f09789c2-52ed-4321-95f8-02c3b3f271e3/volumes" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.201321 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.205543 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.253854 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76e30b28-5660-4c1a-a31b-626cc3bb6c38-config-volume\") pod \"collect-profiles-29495070-b47jt\" (UID: \"76e30b28-5660-4c1a-a31b-626cc3bb6c38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-b47jt" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.253909 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76e30b28-5660-4c1a-a31b-626cc3bb6c38-secret-volume\") pod \"collect-profiles-29495070-b47jt\" (UID: \"76e30b28-5660-4c1a-a31b-626cc3bb6c38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-b47jt" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.253979 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqqvq\" (UniqueName: \"kubernetes.io/projected/76e30b28-5660-4c1a-a31b-626cc3bb6c38-kube-api-access-nqqvq\") pod \"collect-profiles-29495070-b47jt\" (UID: \"76e30b28-5660-4c1a-a31b-626cc3bb6c38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-b47jt" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.355029 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76e30b28-5660-4c1a-a31b-626cc3bb6c38-config-volume\") pod \"collect-profiles-29495070-b47jt\" (UID: \"76e30b28-5660-4c1a-a31b-626cc3bb6c38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-b47jt" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.355313 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76e30b28-5660-4c1a-a31b-626cc3bb6c38-secret-volume\") pod \"collect-profiles-29495070-b47jt\" (UID: \"76e30b28-5660-4c1a-a31b-626cc3bb6c38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-b47jt" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.355346 4714 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nqqvq\" (UniqueName: \"kubernetes.io/projected/76e30b28-5660-4c1a-a31b-626cc3bb6c38-kube-api-access-nqqvq\") pod \"collect-profiles-29495070-b47jt\" (UID: \"76e30b28-5660-4c1a-a31b-626cc3bb6c38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-b47jt" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.356437 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76e30b28-5660-4c1a-a31b-626cc3bb6c38-config-volume\") pod \"collect-profiles-29495070-b47jt\" (UID: \"76e30b28-5660-4c1a-a31b-626cc3bb6c38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-b47jt" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.361973 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76e30b28-5660-4c1a-a31b-626cc3bb6c38-secret-volume\") pod \"collect-profiles-29495070-b47jt\" (UID: \"76e30b28-5660-4c1a-a31b-626cc3bb6c38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-b47jt" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.373288 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqqvq\" (UniqueName: \"kubernetes.io/projected/76e30b28-5660-4c1a-a31b-626cc3bb6c38-kube-api-access-nqqvq\") pod \"collect-profiles-29495070-b47jt\" (UID: \"76e30b28-5660-4c1a-a31b-626cc3bb6c38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-b47jt" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.505784 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-b47jt" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.629230 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.725340 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-b47jt"] Jan 29 16:30:00 crc kubenswrapper[4714]: W0129 16:30:00.728076 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76e30b28_5660_4c1a_a31b_626cc3bb6c38.slice/crio-fcb9acc88fce040f1e751e565b0e7e67527c868c890f2f7bba33c01e544174b1 WatchSource:0}: Error finding container fcb9acc88fce040f1e751e565b0e7e67527c868c890f2f7bba33c01e544174b1: Status 404 returned error can't find the container with id fcb9acc88fce040f1e751e565b0e7e67527c868c890f2f7bba33c01e544174b1 Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.761259 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-var-lib-cinder\") pod \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.761325 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "e01c83c7-65ba-4f1b-9d17-ba5a824216bb" (UID: "e01c83c7-65ba-4f1b-9d17-ba5a824216bb"). InnerVolumeSpecName "var-lib-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.761338 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-lib-modules\") pod \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.761365 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e01c83c7-65ba-4f1b-9d17-ba5a824216bb" (UID: "e01c83c7-65ba-4f1b-9d17-ba5a824216bb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.761388 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-var-locks-brick\") pod \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.761432 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-scripts\") pod \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.761464 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "e01c83c7-65ba-4f1b-9d17-ba5a824216bb" (UID: "e01c83c7-65ba-4f1b-9d17-ba5a824216bb"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.761471 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5dd5\" (UniqueName: \"kubernetes.io/projected/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-kube-api-access-s5dd5\") pod \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.761500 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-config-data\") pod \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.761525 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-config-data-custom\") pod \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.761557 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-etc-nvme\") pod \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.761577 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-etc-iscsi\") pod \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.761670 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-run\") pod \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.761691 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-var-locks-cinder\") pod \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.761661 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "e01c83c7-65ba-4f1b-9d17-ba5a824216bb" (UID: "e01c83c7-65ba-4f1b-9d17-ba5a824216bb"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.761695 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "e01c83c7-65ba-4f1b-9d17-ba5a824216bb" (UID: "e01c83c7-65ba-4f1b-9d17-ba5a824216bb"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.761719 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-run" (OuterVolumeSpecName: "run") pod "e01c83c7-65ba-4f1b-9d17-ba5a824216bb" (UID: "e01c83c7-65ba-4f1b-9d17-ba5a824216bb"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.761780 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-dev\") pod \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.761812 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "e01c83c7-65ba-4f1b-9d17-ba5a824216bb" (UID: "e01c83c7-65ba-4f1b-9d17-ba5a824216bb"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.761834 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-sys\") pod \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.761849 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-dev" (OuterVolumeSpecName: "dev") pod "e01c83c7-65ba-4f1b-9d17-ba5a824216bb" (UID: "e01c83c7-65ba-4f1b-9d17-ba5a824216bb"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.761906 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-etc-machine-id\") pod \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\" (UID: \"e01c83c7-65ba-4f1b-9d17-ba5a824216bb\") " Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.761957 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-sys" (OuterVolumeSpecName: "sys") pod "e01c83c7-65ba-4f1b-9d17-ba5a824216bb" (UID: "e01c83c7-65ba-4f1b-9d17-ba5a824216bb"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.762037 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e01c83c7-65ba-4f1b-9d17-ba5a824216bb" (UID: "e01c83c7-65ba-4f1b-9d17-ba5a824216bb"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.762302 4714 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.762326 4714 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.762340 4714 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.762354 4714 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-run\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.762364 4714 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-dev\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.762375 4714 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-sys\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.762387 4714 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.762399 4714 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.762411 4714 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.762421 4714 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.764622 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-kube-api-access-s5dd5" (OuterVolumeSpecName: "kube-api-access-s5dd5") pod "e01c83c7-65ba-4f1b-9d17-ba5a824216bb" (UID: "e01c83c7-65ba-4f1b-9d17-ba5a824216bb"). InnerVolumeSpecName "kube-api-access-s5dd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.765130 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-scripts" (OuterVolumeSpecName: "scripts") pod "e01c83c7-65ba-4f1b-9d17-ba5a824216bb" (UID: "e01c83c7-65ba-4f1b-9d17-ba5a824216bb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.765326 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e01c83c7-65ba-4f1b-9d17-ba5a824216bb" (UID: "e01c83c7-65ba-4f1b-9d17-ba5a824216bb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.837392 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-config-data" (OuterVolumeSpecName: "config-data") pod "e01c83c7-65ba-4f1b-9d17-ba5a824216bb" (UID: "e01c83c7-65ba-4f1b-9d17-ba5a824216bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.864485 4714 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.864522 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5dd5\" (UniqueName: \"kubernetes.io/projected/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-kube-api-access-s5dd5\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.864535 4714 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:00 crc kubenswrapper[4714]: I0129 16:30:00.864544 4714 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e01c83c7-65ba-4f1b-9d17-ba5a824216bb-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.125380 4714 generic.go:334] "Generic (PLEG): container finished" podID="76e30b28-5660-4c1a-a31b-626cc3bb6c38" containerID="877ac8183fb77e185f572df964edcba4a95cca50385422071d4d1af26ee35620" exitCode=0 Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.125454 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-b47jt" event={"ID":"76e30b28-5660-4c1a-a31b-626cc3bb6c38","Type":"ContainerDied","Data":"877ac8183fb77e185f572df964edcba4a95cca50385422071d4d1af26ee35620"} Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.125720 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-b47jt" event={"ID":"76e30b28-5660-4c1a-a31b-626cc3bb6c38","Type":"ContainerStarted","Data":"fcb9acc88fce040f1e751e565b0e7e67527c868c890f2f7bba33c01e544174b1"} Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.132137 4714 generic.go:334] "Generic (PLEG): container finished" podID="e01c83c7-65ba-4f1b-9d17-ba5a824216bb" containerID="f457144c29e9469b286400011d4bf2e3e4f7a3f73d20a9a7750c293dc9d6911b" exitCode=0 Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.132460 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"e01c83c7-65ba-4f1b-9d17-ba5a824216bb","Type":"ContainerDied","Data":"f457144c29e9469b286400011d4bf2e3e4f7a3f73d20a9a7750c293dc9d6911b"} Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 
16:30:01.132551 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"e01c83c7-65ba-4f1b-9d17-ba5a824216bb","Type":"ContainerDied","Data":"198a3df4efafc7e3421f9f093f38267c1562e37ed9290df232bcf8b82972d9a2"} Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.132578 4714 scope.go:117] "RemoveContainer" containerID="e1bfab0c04a18d8e21e126e8983546eaa8cd1254da1312d48182dae10e5944bc" Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.132478 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.166447 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.169302 4714 scope.go:117] "RemoveContainer" containerID="f457144c29e9469b286400011d4bf2e3e4f7a3f73d20a9a7750c293dc9d6911b" Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.171191 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.187494 4714 scope.go:117] "RemoveContainer" containerID="e1bfab0c04a18d8e21e126e8983546eaa8cd1254da1312d48182dae10e5944bc" Jan 29 16:30:01 crc kubenswrapper[4714]: E0129 16:30:01.187900 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1bfab0c04a18d8e21e126e8983546eaa8cd1254da1312d48182dae10e5944bc\": container with ID starting with e1bfab0c04a18d8e21e126e8983546eaa8cd1254da1312d48182dae10e5944bc not found: ID does not exist" containerID="e1bfab0c04a18d8e21e126e8983546eaa8cd1254da1312d48182dae10e5944bc" Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.187942 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1bfab0c04a18d8e21e126e8983546eaa8cd1254da1312d48182dae10e5944bc"} err="failed to get container status \"e1bfab0c04a18d8e21e126e8983546eaa8cd1254da1312d48182dae10e5944bc\": rpc error: code = NotFound desc = could not find container \"e1bfab0c04a18d8e21e126e8983546eaa8cd1254da1312d48182dae10e5944bc\": container with ID starting with e1bfab0c04a18d8e21e126e8983546eaa8cd1254da1312d48182dae10e5944bc not found: ID does not exist" Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.187965 4714 scope.go:117] "RemoveContainer" containerID="f457144c29e9469b286400011d4bf2e3e4f7a3f73d20a9a7750c293dc9d6911b" Jan 29 16:30:01 crc kubenswrapper[4714]: E0129 16:30:01.188294 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f457144c29e9469b286400011d4bf2e3e4f7a3f73d20a9a7750c293dc9d6911b\": container with ID starting with f457144c29e9469b286400011d4bf2e3e4f7a3f73d20a9a7750c293dc9d6911b not found: ID does not exist" containerID="f457144c29e9469b286400011d4bf2e3e4f7a3f73d20a9a7750c293dc9d6911b" Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.188324 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f457144c29e9469b286400011d4bf2e3e4f7a3f73d20a9a7750c293dc9d6911b"} err="failed to get container status \"f457144c29e9469b286400011d4bf2e3e4f7a3f73d20a9a7750c293dc9d6911b\": rpc error: code = NotFound desc = could not find container \"f457144c29e9469b286400011d4bf2e3e4f7a3f73d20a9a7750c293dc9d6911b\": container with ID starting with 
f457144c29e9469b286400011d4bf2e3e4f7a3f73d20a9a7750c293dc9d6911b not found: ID does not exist" Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.417523 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder9de3-account-delete-6gpgf" Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.474098 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftsmf\" (UniqueName: \"kubernetes.io/projected/8d989387-6924-48df-a1d4-1c63911dd476-kube-api-access-ftsmf\") pod \"8d989387-6924-48df-a1d4-1c63911dd476\" (UID: \"8d989387-6924-48df-a1d4-1c63911dd476\") " Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.474189 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d989387-6924-48df-a1d4-1c63911dd476-operator-scripts\") pod \"8d989387-6924-48df-a1d4-1c63911dd476\" (UID: \"8d989387-6924-48df-a1d4-1c63911dd476\") " Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.474869 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d989387-6924-48df-a1d4-1c63911dd476-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8d989387-6924-48df-a1d4-1c63911dd476" (UID: "8d989387-6924-48df-a1d4-1c63911dd476"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.496065 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d989387-6924-48df-a1d4-1c63911dd476-kube-api-access-ftsmf" (OuterVolumeSpecName: "kube-api-access-ftsmf") pod "8d989387-6924-48df-a1d4-1c63911dd476" (UID: "8d989387-6924-48df-a1d4-1c63911dd476"). InnerVolumeSpecName "kube-api-access-ftsmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.576073 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftsmf\" (UniqueName: \"kubernetes.io/projected/8d989387-6924-48df-a1d4-1c63911dd476-kube-api-access-ftsmf\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.576110 4714 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d989387-6924-48df-a1d4-1c63911dd476-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.651487 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.777847 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rxqx\" (UniqueName: \"kubernetes.io/projected/c0390b29-ac12-4c76-a954-8c7236d81661-kube-api-access-6rxqx\") pod \"c0390b29-ac12-4c76-a954-8c7236d81661\" (UID: \"c0390b29-ac12-4c76-a954-8c7236d81661\") " Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.777900 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0390b29-ac12-4c76-a954-8c7236d81661-config-data\") pod \"c0390b29-ac12-4c76-a954-8c7236d81661\" (UID: \"c0390b29-ac12-4c76-a954-8c7236d81661\") " Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.778026 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0390b29-ac12-4c76-a954-8c7236d81661-config-data-custom\") pod \"c0390b29-ac12-4c76-a954-8c7236d81661\" (UID: \"c0390b29-ac12-4c76-a954-8c7236d81661\") " Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.778081 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0390b29-ac12-4c76-a954-8c7236d81661-scripts\") pod \"c0390b29-ac12-4c76-a954-8c7236d81661\" (UID: \"c0390b29-ac12-4c76-a954-8c7236d81661\") " Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.778114 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0390b29-ac12-4c76-a954-8c7236d81661-etc-machine-id\") pod \"c0390b29-ac12-4c76-a954-8c7236d81661\" (UID: \"c0390b29-ac12-4c76-a954-8c7236d81661\") " Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.778351 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0390b29-ac12-4c76-a954-8c7236d81661-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c0390b29-ac12-4c76-a954-8c7236d81661" (UID: "c0390b29-ac12-4c76-a954-8c7236d81661"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.780828 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0390b29-ac12-4c76-a954-8c7236d81661-kube-api-access-6rxqx" (OuterVolumeSpecName: "kube-api-access-6rxqx") pod "c0390b29-ac12-4c76-a954-8c7236d81661" (UID: "c0390b29-ac12-4c76-a954-8c7236d81661"). InnerVolumeSpecName "kube-api-access-6rxqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.781154 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0390b29-ac12-4c76-a954-8c7236d81661-scripts" (OuterVolumeSpecName: "scripts") pod "c0390b29-ac12-4c76-a954-8c7236d81661" (UID: "c0390b29-ac12-4c76-a954-8c7236d81661"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.782163 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0390b29-ac12-4c76-a954-8c7236d81661-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c0390b29-ac12-4c76-a954-8c7236d81661" (UID: "c0390b29-ac12-4c76-a954-8c7236d81661"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.849063 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0390b29-ac12-4c76-a954-8c7236d81661-config-data" (OuterVolumeSpecName: "config-data") pod "c0390b29-ac12-4c76-a954-8c7236d81661" (UID: "c0390b29-ac12-4c76-a954-8c7236d81661"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.879140 4714 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0390b29-ac12-4c76-a954-8c7236d81661-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.879194 4714 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0390b29-ac12-4c76-a954-8c7236d81661-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.879208 4714 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0390b29-ac12-4c76-a954-8c7236d81661-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.879221 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rxqx\" (UniqueName: \"kubernetes.io/projected/c0390b29-ac12-4c76-a954-8c7236d81661-kube-api-access-6rxqx\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:01 crc kubenswrapper[4714]: I0129 16:30:01.879235 4714 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0390b29-ac12-4c76-a954-8c7236d81661-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.022184 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/cinder-api-0" podUID="c5a3f592-54c6-44f1-9f09-f366502287a6" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.98:8776/healthcheck\": read tcp 10.217.0.2:33032->10.217.0.98:8776: read: connection reset by peer" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.149579 4714 generic.go:334] "Generic (PLEG): container finished" podID="c5a3f592-54c6-44f1-9f09-f366502287a6" containerID="922b00d2d16cf09fabe14b15b7a7648c66244d5a615d55c77b2cc333c2095cf3" exitCode=0 Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.149684 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"c5a3f592-54c6-44f1-9f09-f366502287a6","Type":"ContainerDied","Data":"922b00d2d16cf09fabe14b15b7a7648c66244d5a615d55c77b2cc333c2095cf3"} Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.152333 4714 generic.go:334] "Generic (PLEG): container finished" podID="c0390b29-ac12-4c76-a954-8c7236d81661" containerID="8b1dd6490cae9482eb69b08b1702b33a571bfe8525bbdf572a2f8f5feb7f4e29" exitCode=0 Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.152398 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"c0390b29-ac12-4c76-a954-8c7236d81661","Type":"ContainerDied","Data":"8b1dd6490cae9482eb69b08b1702b33a571bfe8525bbdf572a2f8f5feb7f4e29"} Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.152397 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.152426 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"c0390b29-ac12-4c76-a954-8c7236d81661","Type":"ContainerDied","Data":"a7e3257c239df083499879bab78d2e91169e0bd8d92a8c5fac288924f3619908"} Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.152450 4714 scope.go:117] "RemoveContainer" containerID="a3a659f2d53d2c5e83b9c4601491f3c732c7a7d0fe867de849d8aab7c8e4a69c" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.156250 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder9de3-account-delete-6gpgf" event={"ID":"8d989387-6924-48df-a1d4-1c63911dd476","Type":"ContainerDied","Data":"16c8423dc7a88554a04643282ed7d3e92befa98e3c24c9a8e1f726e72cddc134"} Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.156286 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16c8423dc7a88554a04643282ed7d3e92befa98e3c24c9a8e1f726e72cddc134" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.156392 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder9de3-account-delete-6gpgf" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.210703 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" path="/var/lib/kubelet/pods/2e154d80-4b79-4f74-809e-c1c274ed4063/volumes" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.211473 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e01c83c7-65ba-4f1b-9d17-ba5a824216bb" path="/var/lib/kubelet/pods/e01c83c7-65ba-4f1b-9d17-ba5a824216bb/volumes" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.212057 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.212088 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.214920 4714 scope.go:117] "RemoveContainer" containerID="8b1dd6490cae9482eb69b08b1702b33a571bfe8525bbdf572a2f8f5feb7f4e29" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.236159 4714 scope.go:117] "RemoveContainer" containerID="a3a659f2d53d2c5e83b9c4601491f3c732c7a7d0fe867de849d8aab7c8e4a69c" Jan 29 16:30:02 crc kubenswrapper[4714]: E0129 16:30:02.238330 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3a659f2d53d2c5e83b9c4601491f3c732c7a7d0fe867de849d8aab7c8e4a69c\": container with ID starting with a3a659f2d53d2c5e83b9c4601491f3c732c7a7d0fe867de849d8aab7c8e4a69c not found: ID does not exist" containerID="a3a659f2d53d2c5e83b9c4601491f3c732c7a7d0fe867de849d8aab7c8e4a69c" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.238358 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3a659f2d53d2c5e83b9c4601491f3c732c7a7d0fe867de849d8aab7c8e4a69c"} err="failed to get container status \"a3a659f2d53d2c5e83b9c4601491f3c732c7a7d0fe867de849d8aab7c8e4a69c\": rpc error: code = NotFound desc = could not find container \"a3a659f2d53d2c5e83b9c4601491f3c732c7a7d0fe867de849d8aab7c8e4a69c\": container with ID starting with a3a659f2d53d2c5e83b9c4601491f3c732c7a7d0fe867de849d8aab7c8e4a69c not found: ID does not exist" Jan 29 
16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.238379 4714 scope.go:117] "RemoveContainer" containerID="8b1dd6490cae9482eb69b08b1702b33a571bfe8525bbdf572a2f8f5feb7f4e29" Jan 29 16:30:02 crc kubenswrapper[4714]: E0129 16:30:02.249238 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b1dd6490cae9482eb69b08b1702b33a571bfe8525bbdf572a2f8f5feb7f4e29\": container with ID starting with 8b1dd6490cae9482eb69b08b1702b33a571bfe8525bbdf572a2f8f5feb7f4e29 not found: ID does not exist" containerID="8b1dd6490cae9482eb69b08b1702b33a571bfe8525bbdf572a2f8f5feb7f4e29" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.249274 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b1dd6490cae9482eb69b08b1702b33a571bfe8525bbdf572a2f8f5feb7f4e29"} err="failed to get container status \"8b1dd6490cae9482eb69b08b1702b33a571bfe8525bbdf572a2f8f5feb7f4e29\": rpc error: code = NotFound desc = could not find container \"8b1dd6490cae9482eb69b08b1702b33a571bfe8525bbdf572a2f8f5feb7f4e29\": container with ID starting with 8b1dd6490cae9482eb69b08b1702b33a571bfe8525bbdf572a2f8f5feb7f4e29 not found: ID does not exist" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.404018 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.407782 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-b47jt" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.491547 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p55ww\" (UniqueName: \"kubernetes.io/projected/c5a3f592-54c6-44f1-9f09-f366502287a6-kube-api-access-p55ww\") pod \"c5a3f592-54c6-44f1-9f09-f366502287a6\" (UID: \"c5a3f592-54c6-44f1-9f09-f366502287a6\") " Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.491597 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqqvq\" (UniqueName: \"kubernetes.io/projected/76e30b28-5660-4c1a-a31b-626cc3bb6c38-kube-api-access-nqqvq\") pod \"76e30b28-5660-4c1a-a31b-626cc3bb6c38\" (UID: \"76e30b28-5660-4c1a-a31b-626cc3bb6c38\") " Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.491623 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76e30b28-5660-4c1a-a31b-626cc3bb6c38-config-volume\") pod \"76e30b28-5660-4c1a-a31b-626cc3bb6c38\" (UID: \"76e30b28-5660-4c1a-a31b-626cc3bb6c38\") " Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.491709 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5a3f592-54c6-44f1-9f09-f366502287a6-etc-machine-id\") pod \"c5a3f592-54c6-44f1-9f09-f366502287a6\" (UID: \"c5a3f592-54c6-44f1-9f09-f366502287a6\") " Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.491741 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5a3f592-54c6-44f1-9f09-f366502287a6-config-data-custom\") pod \"c5a3f592-54c6-44f1-9f09-f366502287a6\" (UID: \"c5a3f592-54c6-44f1-9f09-f366502287a6\") " Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.491758 4714 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a3f592-54c6-44f1-9f09-f366502287a6-config-data\") pod \"c5a3f592-54c6-44f1-9f09-f366502287a6\" (UID: \"c5a3f592-54c6-44f1-9f09-f366502287a6\") " Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.491794 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5a3f592-54c6-44f1-9f09-f366502287a6-logs\") pod \"c5a3f592-54c6-44f1-9f09-f366502287a6\" (UID: \"c5a3f592-54c6-44f1-9f09-f366502287a6\") " Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.491819 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76e30b28-5660-4c1a-a31b-626cc3bb6c38-secret-volume\") pod \"76e30b28-5660-4c1a-a31b-626cc3bb6c38\" (UID: \"76e30b28-5660-4c1a-a31b-626cc3bb6c38\") " Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.491847 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5a3f592-54c6-44f1-9f09-f366502287a6-scripts\") pod \"c5a3f592-54c6-44f1-9f09-f366502287a6\" (UID: \"c5a3f592-54c6-44f1-9f09-f366502287a6\") " Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.492549 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5a3f592-54c6-44f1-9f09-f366502287a6-logs" (OuterVolumeSpecName: "logs") pod "c5a3f592-54c6-44f1-9f09-f366502287a6" (UID: "c5a3f592-54c6-44f1-9f09-f366502287a6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.493142 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e30b28-5660-4c1a-a31b-626cc3bb6c38-config-volume" (OuterVolumeSpecName: "config-volume") pod "76e30b28-5660-4c1a-a31b-626cc3bb6c38" (UID: "76e30b28-5660-4c1a-a31b-626cc3bb6c38"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.493203 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5a3f592-54c6-44f1-9f09-f366502287a6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c5a3f592-54c6-44f1-9f09-f366502287a6" (UID: "c5a3f592-54c6-44f1-9f09-f366502287a6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.495204 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a3f592-54c6-44f1-9f09-f366502287a6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c5a3f592-54c6-44f1-9f09-f366502287a6" (UID: "c5a3f592-54c6-44f1-9f09-f366502287a6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.495230 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e30b28-5660-4c1a-a31b-626cc3bb6c38-kube-api-access-nqqvq" (OuterVolumeSpecName: "kube-api-access-nqqvq") pod "76e30b28-5660-4c1a-a31b-626cc3bb6c38" (UID: "76e30b28-5660-4c1a-a31b-626cc3bb6c38"). InnerVolumeSpecName "kube-api-access-nqqvq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.495478 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76e30b28-5660-4c1a-a31b-626cc3bb6c38-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "76e30b28-5660-4c1a-a31b-626cc3bb6c38" (UID: "76e30b28-5660-4c1a-a31b-626cc3bb6c38"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.496099 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a3f592-54c6-44f1-9f09-f366502287a6-scripts" (OuterVolumeSpecName: "scripts") pod "c5a3f592-54c6-44f1-9f09-f366502287a6" (UID: "c5a3f592-54c6-44f1-9f09-f366502287a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.496898 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a3f592-54c6-44f1-9f09-f366502287a6-kube-api-access-p55ww" (OuterVolumeSpecName: "kube-api-access-p55ww") pod "c5a3f592-54c6-44f1-9f09-f366502287a6" (UID: "c5a3f592-54c6-44f1-9f09-f366502287a6"). InnerVolumeSpecName "kube-api-access-p55ww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.528119 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a3f592-54c6-44f1-9f09-f366502287a6-config-data" (OuterVolumeSpecName: "config-data") pod "c5a3f592-54c6-44f1-9f09-f366502287a6" (UID: "c5a3f592-54c6-44f1-9f09-f366502287a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.593406 4714 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76e30b28-5660-4c1a-a31b-626cc3bb6c38-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.593442 4714 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5a3f592-54c6-44f1-9f09-f366502287a6-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.593454 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p55ww\" (UniqueName: \"kubernetes.io/projected/c5a3f592-54c6-44f1-9f09-f366502287a6-kube-api-access-p55ww\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.593468 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqqvq\" (UniqueName: \"kubernetes.io/projected/76e30b28-5660-4c1a-a31b-626cc3bb6c38-kube-api-access-nqqvq\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.593478 4714 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76e30b28-5660-4c1a-a31b-626cc3bb6c38-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.593486 4714 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5a3f592-54c6-44f1-9f09-f366502287a6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.593496 4714 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/c5a3f592-54c6-44f1-9f09-f366502287a6-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.593503 4714 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a3f592-54c6-44f1-9f09-f366502287a6-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:02 crc kubenswrapper[4714]: I0129 16:30:02.593511 4714 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5a3f592-54c6-44f1-9f09-f366502287a6-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.168264 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-b47jt" event={"ID":"76e30b28-5660-4c1a-a31b-626cc3bb6c38","Type":"ContainerDied","Data":"fcb9acc88fce040f1e751e565b0e7e67527c868c890f2f7bba33c01e544174b1"} Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.168324 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-b47jt" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.168347 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcb9acc88fce040f1e751e565b0e7e67527c868c890f2f7bba33c01e544174b1" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.170670 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"c5a3f592-54c6-44f1-9f09-f366502287a6","Type":"ContainerDied","Data":"fcf337a11251f351330506928dad191d61b9373e215ea14ec246e54e5e9a3034"} Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.170752 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.170766 4714 scope.go:117] "RemoveContainer" containerID="922b00d2d16cf09fabe14b15b7a7648c66244d5a615d55c77b2cc333c2095cf3" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.193219 4714 scope.go:117] "RemoveContainer" containerID="32d98c19e9cd977d60b8a6256b941e6d21b55abe95ffb10609d846a79c267c86" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.210068 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.215696 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.852199 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-64s46"] Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.865709 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-64s46"] Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.877069 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder9de3-account-delete-6gpgf"] Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.883802 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-9de3-account-create-update-79889"] Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.889461 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder9de3-account-delete-6gpgf"] Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.895619 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-9de3-account-create-update-79889"] Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.938360 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-db-create-hqbqv"] Jan 29 16:30:03 crc kubenswrapper[4714]: E0129 16:30:03.938709 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d989387-6924-48df-a1d4-1c63911dd476" containerName="mariadb-account-delete" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.938737 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d989387-6924-48df-a1d4-1c63911dd476" containerName="mariadb-account-delete" Jan 29 16:30:03 crc kubenswrapper[4714]: E0129 16:30:03.938766 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0390b29-ac12-4c76-a954-8c7236d81661" containerName="probe" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.938780 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0390b29-ac12-4c76-a954-8c7236d81661" containerName="probe" Jan 29 16:30:03 crc kubenswrapper[4714]: E0129 16:30:03.938798 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e01c83c7-65ba-4f1b-9d17-ba5a824216bb" containerName="probe" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.938809 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="e01c83c7-65ba-4f1b-9d17-ba5a824216bb" containerName="probe" Jan 29 16:30:03 crc kubenswrapper[4714]: E0129 16:30:03.938825 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a3f592-54c6-44f1-9f09-f366502287a6" containerName="cinder-api-log" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.938836 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a3f592-54c6-44f1-9f09-f366502287a6" containerName="cinder-api-log" Jan 29 16:30:03 crc kubenswrapper[4714]: 
E0129 16:30:03.938852 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a3f592-54c6-44f1-9f09-f366502287a6" containerName="cinder-api" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.938862 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a3f592-54c6-44f1-9f09-f366502287a6" containerName="cinder-api" Jan 29 16:30:03 crc kubenswrapper[4714]: E0129 16:30:03.938881 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e30b28-5660-4c1a-a31b-626cc3bb6c38" containerName="collect-profiles" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.938892 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e30b28-5660-4c1a-a31b-626cc3bb6c38" containerName="collect-profiles" Jan 29 16:30:03 crc kubenswrapper[4714]: E0129 16:30:03.938909 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e01c83c7-65ba-4f1b-9d17-ba5a824216bb" containerName="cinder-backup" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.938919 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="e01c83c7-65ba-4f1b-9d17-ba5a824216bb" containerName="cinder-backup" Jan 29 16:30:03 crc kubenswrapper[4714]: E0129 16:30:03.938959 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0390b29-ac12-4c76-a954-8c7236d81661" containerName="cinder-scheduler" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.938973 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0390b29-ac12-4c76-a954-8c7236d81661" containerName="cinder-scheduler" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.939169 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerName="cinder-volume" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.939193 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0390b29-ac12-4c76-a954-8c7236d81661" containerName="probe" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.939213 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="e01c83c7-65ba-4f1b-9d17-ba5a824216bb" containerName="cinder-backup" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.939228 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a3f592-54c6-44f1-9f09-f366502287a6" containerName="cinder-api" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.939244 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d989387-6924-48df-a1d4-1c63911dd476" containerName="mariadb-account-delete" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.939258 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a3f592-54c6-44f1-9f09-f366502287a6" containerName="cinder-api-log" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.939272 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="76e30b28-5660-4c1a-a31b-626cc3bb6c38" containerName="collect-profiles" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.939288 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0390b29-ac12-4c76-a954-8c7236d81661" containerName="cinder-scheduler" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.939306 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="e01c83c7-65ba-4f1b-9d17-ba5a824216bb" containerName="probe" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.939324 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e154d80-4b79-4f74-809e-c1c274ed4063" containerName="probe" Jan 29 16:30:03 
crc kubenswrapper[4714]: I0129 16:30:03.940052 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-hqbqv" Jan 29 16:30:03 crc kubenswrapper[4714]: I0129 16:30:03.944141 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-hqbqv"] Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.037199 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4vmr\" (UniqueName: \"kubernetes.io/projected/5e9c54b1-972a-4807-90af-f94a884002bd-kube-api-access-g4vmr\") pod \"cinder-db-create-hqbqv\" (UID: \"5e9c54b1-972a-4807-90af-f94a884002bd\") " pod="cinder-kuttl-tests/cinder-db-create-hqbqv" Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.037268 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e9c54b1-972a-4807-90af-f94a884002bd-operator-scripts\") pod \"cinder-db-create-hqbqv\" (UID: \"5e9c54b1-972a-4807-90af-f94a884002bd\") " pod="cinder-kuttl-tests/cinder-db-create-hqbqv" Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.054762 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-6e48-account-create-update-qff9j"] Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.055529 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-6e48-account-create-update-qff9j" Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.057861 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-db-secret" Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.073607 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-6e48-account-create-update-qff9j"] Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.138813 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3306850-8843-48e1-b203-7f52de72682f-operator-scripts\") pod \"cinder-6e48-account-create-update-qff9j\" (UID: \"c3306850-8843-48e1-b203-7f52de72682f\") " pod="cinder-kuttl-tests/cinder-6e48-account-create-update-qff9j" Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.139107 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4vmr\" (UniqueName: \"kubernetes.io/projected/5e9c54b1-972a-4807-90af-f94a884002bd-kube-api-access-g4vmr\") pod \"cinder-db-create-hqbqv\" (UID: \"5e9c54b1-972a-4807-90af-f94a884002bd\") " pod="cinder-kuttl-tests/cinder-db-create-hqbqv" Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.139175 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pztc9\" (UniqueName: \"kubernetes.io/projected/c3306850-8843-48e1-b203-7f52de72682f-kube-api-access-pztc9\") pod \"cinder-6e48-account-create-update-qff9j\" (UID: \"c3306850-8843-48e1-b203-7f52de72682f\") " pod="cinder-kuttl-tests/cinder-6e48-account-create-update-qff9j" Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.139237 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e9c54b1-972a-4807-90af-f94a884002bd-operator-scripts\") pod \"cinder-db-create-hqbqv\" (UID: \"5e9c54b1-972a-4807-90af-f94a884002bd\") " 
pod="cinder-kuttl-tests/cinder-db-create-hqbqv" Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.140184 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e9c54b1-972a-4807-90af-f94a884002bd-operator-scripts\") pod \"cinder-db-create-hqbqv\" (UID: \"5e9c54b1-972a-4807-90af-f94a884002bd\") " pod="cinder-kuttl-tests/cinder-db-create-hqbqv" Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.162647 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4vmr\" (UniqueName: \"kubernetes.io/projected/5e9c54b1-972a-4807-90af-f94a884002bd-kube-api-access-g4vmr\") pod \"cinder-db-create-hqbqv\" (UID: \"5e9c54b1-972a-4807-90af-f94a884002bd\") " pod="cinder-kuttl-tests/cinder-db-create-hqbqv" Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.193417 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d989387-6924-48df-a1d4-1c63911dd476" path="/var/lib/kubelet/pods/8d989387-6924-48df-a1d4-1c63911dd476/volumes" Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.194331 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0390b29-ac12-4c76-a954-8c7236d81661" path="/var/lib/kubelet/pods/c0390b29-ac12-4c76-a954-8c7236d81661/volumes" Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.195135 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5a3f592-54c6-44f1-9f09-f366502287a6" path="/var/lib/kubelet/pods/c5a3f592-54c6-44f1-9f09-f366502287a6/volumes" Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.196257 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c747693b-f2e9-4073-9432-115643a6b6d1" path="/var/lib/kubelet/pods/c747693b-f2e9-4073-9432-115643a6b6d1/volumes" Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.196749 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e670557b-650e-478c-9f87-eaba6641f02f" path="/var/lib/kubelet/pods/e670557b-650e-478c-9f87-eaba6641f02f/volumes" Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.240917 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pztc9\" (UniqueName: \"kubernetes.io/projected/c3306850-8843-48e1-b203-7f52de72682f-kube-api-access-pztc9\") pod \"cinder-6e48-account-create-update-qff9j\" (UID: \"c3306850-8843-48e1-b203-7f52de72682f\") " pod="cinder-kuttl-tests/cinder-6e48-account-create-update-qff9j" Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.241047 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3306850-8843-48e1-b203-7f52de72682f-operator-scripts\") pod \"cinder-6e48-account-create-update-qff9j\" (UID: \"c3306850-8843-48e1-b203-7f52de72682f\") " pod="cinder-kuttl-tests/cinder-6e48-account-create-update-qff9j" Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.241834 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3306850-8843-48e1-b203-7f52de72682f-operator-scripts\") pod \"cinder-6e48-account-create-update-qff9j\" (UID: \"c3306850-8843-48e1-b203-7f52de72682f\") " pod="cinder-kuttl-tests/cinder-6e48-account-create-update-qff9j" Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.258122 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pztc9\" (UniqueName: 
\"kubernetes.io/projected/c3306850-8843-48e1-b203-7f52de72682f-kube-api-access-pztc9\") pod \"cinder-6e48-account-create-update-qff9j\" (UID: \"c3306850-8843-48e1-b203-7f52de72682f\") " pod="cinder-kuttl-tests/cinder-6e48-account-create-update-qff9j" Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.264981 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-hqbqv" Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.375492 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-6e48-account-create-update-qff9j" Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.600898 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-6e48-account-create-update-qff9j"] Jan 29 16:30:04 crc kubenswrapper[4714]: W0129 16:30:04.604251 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3306850_8843_48e1_b203_7f52de72682f.slice/crio-802cb29550e90931d915fdf150b111a01142b90181c1330445c631bbf924410b WatchSource:0}: Error finding container 802cb29550e90931d915fdf150b111a01142b90181c1330445c631bbf924410b: Status 404 returned error can't find the container with id 802cb29550e90931d915fdf150b111a01142b90181c1330445c631bbf924410b Jan 29 16:30:04 crc kubenswrapper[4714]: I0129 16:30:04.694413 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-hqbqv"] Jan 29 16:30:04 crc kubenswrapper[4714]: W0129 16:30:04.695873 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e9c54b1_972a_4807_90af_f94a884002bd.slice/crio-cd3ed4ce144cba24bd2fd48ece0ee79d8f7dc5583431a0cba4fe696107cd50a3 WatchSource:0}: Error finding container cd3ed4ce144cba24bd2fd48ece0ee79d8f7dc5583431a0cba4fe696107cd50a3: Status 404 returned error can't find the container with id cd3ed4ce144cba24bd2fd48ece0ee79d8f7dc5583431a0cba4fe696107cd50a3 Jan 29 16:30:05 crc kubenswrapper[4714]: I0129 16:30:05.187777 4714 generic.go:334] "Generic (PLEG): container finished" podID="c3306850-8843-48e1-b203-7f52de72682f" containerID="b09b52cf99e966280d15ceb6a6529b45a9303070f260a1e02acc4b1cf0da02c3" exitCode=0 Jan 29 16:30:05 crc kubenswrapper[4714]: I0129 16:30:05.187860 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-6e48-account-create-update-qff9j" event={"ID":"c3306850-8843-48e1-b203-7f52de72682f","Type":"ContainerDied","Data":"b09b52cf99e966280d15ceb6a6529b45a9303070f260a1e02acc4b1cf0da02c3"} Jan 29 16:30:05 crc kubenswrapper[4714]: I0129 16:30:05.187892 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-6e48-account-create-update-qff9j" event={"ID":"c3306850-8843-48e1-b203-7f52de72682f","Type":"ContainerStarted","Data":"802cb29550e90931d915fdf150b111a01142b90181c1330445c631bbf924410b"} Jan 29 16:30:05 crc kubenswrapper[4714]: I0129 16:30:05.189536 4714 generic.go:334] "Generic (PLEG): container finished" podID="5e9c54b1-972a-4807-90af-f94a884002bd" containerID="ad6e6492e17aa0045196d2d7816583e0511c88fa1ba9566c638560f377a604b8" exitCode=0 Jan 29 16:30:05 crc kubenswrapper[4714]: I0129 16:30:05.189564 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-hqbqv" 
event={"ID":"5e9c54b1-972a-4807-90af-f94a884002bd","Type":"ContainerDied","Data":"ad6e6492e17aa0045196d2d7816583e0511c88fa1ba9566c638560f377a604b8"} Jan 29 16:30:05 crc kubenswrapper[4714]: I0129 16:30:05.189580 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-hqbqv" event={"ID":"5e9c54b1-972a-4807-90af-f94a884002bd","Type":"ContainerStarted","Data":"cd3ed4ce144cba24bd2fd48ece0ee79d8f7dc5583431a0cba4fe696107cd50a3"} Jan 29 16:30:06 crc kubenswrapper[4714]: I0129 16:30:06.518115 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-hqbqv" Jan 29 16:30:06 crc kubenswrapper[4714]: I0129 16:30:06.524019 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-6e48-account-create-update-qff9j" Jan 29 16:30:06 crc kubenswrapper[4714]: I0129 16:30:06.676879 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pztc9\" (UniqueName: \"kubernetes.io/projected/c3306850-8843-48e1-b203-7f52de72682f-kube-api-access-pztc9\") pod \"c3306850-8843-48e1-b203-7f52de72682f\" (UID: \"c3306850-8843-48e1-b203-7f52de72682f\") " Jan 29 16:30:06 crc kubenswrapper[4714]: I0129 16:30:06.677012 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4vmr\" (UniqueName: \"kubernetes.io/projected/5e9c54b1-972a-4807-90af-f94a884002bd-kube-api-access-g4vmr\") pod \"5e9c54b1-972a-4807-90af-f94a884002bd\" (UID: \"5e9c54b1-972a-4807-90af-f94a884002bd\") " Jan 29 16:30:06 crc kubenswrapper[4714]: I0129 16:30:06.677082 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3306850-8843-48e1-b203-7f52de72682f-operator-scripts\") pod \"c3306850-8843-48e1-b203-7f52de72682f\" (UID: \"c3306850-8843-48e1-b203-7f52de72682f\") " Jan 29 16:30:06 crc kubenswrapper[4714]: I0129 16:30:06.677126 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e9c54b1-972a-4807-90af-f94a884002bd-operator-scripts\") pod \"5e9c54b1-972a-4807-90af-f94a884002bd\" (UID: \"5e9c54b1-972a-4807-90af-f94a884002bd\") " Jan 29 16:30:06 crc kubenswrapper[4714]: I0129 16:30:06.678804 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3306850-8843-48e1-b203-7f52de72682f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3306850-8843-48e1-b203-7f52de72682f" (UID: "c3306850-8843-48e1-b203-7f52de72682f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:06 crc kubenswrapper[4714]: I0129 16:30:06.678873 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e9c54b1-972a-4807-90af-f94a884002bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e9c54b1-972a-4807-90af-f94a884002bd" (UID: "5e9c54b1-972a-4807-90af-f94a884002bd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:06 crc kubenswrapper[4714]: I0129 16:30:06.685930 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e9c54b1-972a-4807-90af-f94a884002bd-kube-api-access-g4vmr" (OuterVolumeSpecName: "kube-api-access-g4vmr") pod "5e9c54b1-972a-4807-90af-f94a884002bd" (UID: "5e9c54b1-972a-4807-90af-f94a884002bd"). InnerVolumeSpecName "kube-api-access-g4vmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:06 crc kubenswrapper[4714]: I0129 16:30:06.686266 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3306850-8843-48e1-b203-7f52de72682f-kube-api-access-pztc9" (OuterVolumeSpecName: "kube-api-access-pztc9") pod "c3306850-8843-48e1-b203-7f52de72682f" (UID: "c3306850-8843-48e1-b203-7f52de72682f"). InnerVolumeSpecName "kube-api-access-pztc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:06 crc kubenswrapper[4714]: I0129 16:30:06.778871 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pztc9\" (UniqueName: \"kubernetes.io/projected/c3306850-8843-48e1-b203-7f52de72682f-kube-api-access-pztc9\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:06 crc kubenswrapper[4714]: I0129 16:30:06.778915 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4vmr\" (UniqueName: \"kubernetes.io/projected/5e9c54b1-972a-4807-90af-f94a884002bd-kube-api-access-g4vmr\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:06 crc kubenswrapper[4714]: I0129 16:30:06.778953 4714 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3306850-8843-48e1-b203-7f52de72682f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:06 crc kubenswrapper[4714]: I0129 16:30:06.778972 4714 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e9c54b1-972a-4807-90af-f94a884002bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:07 crc kubenswrapper[4714]: I0129 16:30:07.210436 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-hqbqv" event={"ID":"5e9c54b1-972a-4807-90af-f94a884002bd","Type":"ContainerDied","Data":"cd3ed4ce144cba24bd2fd48ece0ee79d8f7dc5583431a0cba4fe696107cd50a3"} Jan 29 16:30:07 crc kubenswrapper[4714]: I0129 16:30:07.210981 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd3ed4ce144cba24bd2fd48ece0ee79d8f7dc5583431a0cba4fe696107cd50a3" Jan 29 16:30:07 crc kubenswrapper[4714]: I0129 16:30:07.211101 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-hqbqv" Jan 29 16:30:07 crc kubenswrapper[4714]: I0129 16:30:07.213389 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-6e48-account-create-update-qff9j" event={"ID":"c3306850-8843-48e1-b203-7f52de72682f","Type":"ContainerDied","Data":"802cb29550e90931d915fdf150b111a01142b90181c1330445c631bbf924410b"} Jan 29 16:30:07 crc kubenswrapper[4714]: I0129 16:30:07.213445 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="802cb29550e90931d915fdf150b111a01142b90181c1330445c631bbf924410b" Jan 29 16:30:07 crc kubenswrapper[4714]: I0129 16:30:07.213467 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-6e48-account-create-update-qff9j" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.295428 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-zpbtx"] Jan 29 16:30:09 crc kubenswrapper[4714]: E0129 16:30:09.295726 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e9c54b1-972a-4807-90af-f94a884002bd" containerName="mariadb-database-create" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.295738 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e9c54b1-972a-4807-90af-f94a884002bd" containerName="mariadb-database-create" Jan 29 16:30:09 crc kubenswrapper[4714]: E0129 16:30:09.295752 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3306850-8843-48e1-b203-7f52de72682f" containerName="mariadb-account-create-update" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.295759 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3306850-8843-48e1-b203-7f52de72682f" containerName="mariadb-account-create-update" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.295884 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e9c54b1-972a-4807-90af-f94a884002bd" containerName="mariadb-database-create" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.296002 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3306850-8843-48e1-b203-7f52de72682f" containerName="mariadb-account-create-update" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.296412 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.298820 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-scripts" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.299543 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-config-data" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.299657 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-cinder-dockercfg-thgfp" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.301272 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"combined-ca-bundle" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.317906 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-zpbtx"] Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.416903 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0cabdde2-0578-405a-9147-efe4d1db7e90-etc-machine-id\") pod \"cinder-db-sync-zpbtx\" (UID: \"0cabdde2-0578-405a-9147-efe4d1db7e90\") " pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.417521 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cabdde2-0578-405a-9147-efe4d1db7e90-combined-ca-bundle\") pod \"cinder-db-sync-zpbtx\" (UID: \"0cabdde2-0578-405a-9147-efe4d1db7e90\") " pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.417568 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0cabdde2-0578-405a-9147-efe4d1db7e90-db-sync-config-data\") pod \"cinder-db-sync-zpbtx\" (UID: \"0cabdde2-0578-405a-9147-efe4d1db7e90\") " pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.417783 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cabdde2-0578-405a-9147-efe4d1db7e90-scripts\") pod \"cinder-db-sync-zpbtx\" (UID: \"0cabdde2-0578-405a-9147-efe4d1db7e90\") " pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.417904 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5p98\" (UniqueName: \"kubernetes.io/projected/0cabdde2-0578-405a-9147-efe4d1db7e90-kube-api-access-w5p98\") pod \"cinder-db-sync-zpbtx\" (UID: \"0cabdde2-0578-405a-9147-efe4d1db7e90\") " pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.418063 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cabdde2-0578-405a-9147-efe4d1db7e90-config-data\") pod \"cinder-db-sync-zpbtx\" (UID: \"0cabdde2-0578-405a-9147-efe4d1db7e90\") " pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.519738 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cabdde2-0578-405a-9147-efe4d1db7e90-combined-ca-bundle\") pod \"cinder-db-sync-zpbtx\" (UID: \"0cabdde2-0578-405a-9147-efe4d1db7e90\") " pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.519802 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0cabdde2-0578-405a-9147-efe4d1db7e90-db-sync-config-data\") pod \"cinder-db-sync-zpbtx\" (UID: \"0cabdde2-0578-405a-9147-efe4d1db7e90\") " pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.519830 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cabdde2-0578-405a-9147-efe4d1db7e90-scripts\") pod \"cinder-db-sync-zpbtx\" (UID: \"0cabdde2-0578-405a-9147-efe4d1db7e90\") " pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.519856 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5p98\" (UniqueName: \"kubernetes.io/projected/0cabdde2-0578-405a-9147-efe4d1db7e90-kube-api-access-w5p98\") pod \"cinder-db-sync-zpbtx\" (UID: \"0cabdde2-0578-405a-9147-efe4d1db7e90\") " pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.519890 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cabdde2-0578-405a-9147-efe4d1db7e90-config-data\") pod \"cinder-db-sync-zpbtx\" (UID: \"0cabdde2-0578-405a-9147-efe4d1db7e90\") " pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.519945 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/0cabdde2-0578-405a-9147-efe4d1db7e90-etc-machine-id\") pod \"cinder-db-sync-zpbtx\" (UID: \"0cabdde2-0578-405a-9147-efe4d1db7e90\") " pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.520008 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0cabdde2-0578-405a-9147-efe4d1db7e90-etc-machine-id\") pod \"cinder-db-sync-zpbtx\" (UID: \"0cabdde2-0578-405a-9147-efe4d1db7e90\") " pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.527736 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cabdde2-0578-405a-9147-efe4d1db7e90-scripts\") pod \"cinder-db-sync-zpbtx\" (UID: \"0cabdde2-0578-405a-9147-efe4d1db7e90\") " pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.527781 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0cabdde2-0578-405a-9147-efe4d1db7e90-db-sync-config-data\") pod \"cinder-db-sync-zpbtx\" (UID: \"0cabdde2-0578-405a-9147-efe4d1db7e90\") " pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.527870 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cabdde2-0578-405a-9147-efe4d1db7e90-combined-ca-bundle\") pod \"cinder-db-sync-zpbtx\" (UID: \"0cabdde2-0578-405a-9147-efe4d1db7e90\") " pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.528564 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cabdde2-0578-405a-9147-efe4d1db7e90-config-data\") pod \"cinder-db-sync-zpbtx\" (UID: \"0cabdde2-0578-405a-9147-efe4d1db7e90\") " pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.538811 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5p98\" (UniqueName: \"kubernetes.io/projected/0cabdde2-0578-405a-9147-efe4d1db7e90-kube-api-access-w5p98\") pod \"cinder-db-sync-zpbtx\" (UID: \"0cabdde2-0578-405a-9147-efe4d1db7e90\") " pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" Jan 29 16:30:09 crc kubenswrapper[4714]: I0129 16:30:09.642799 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" Jan 29 16:30:10 crc kubenswrapper[4714]: I0129 16:30:10.065432 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-zpbtx"] Jan 29 16:30:10 crc kubenswrapper[4714]: I0129 16:30:10.240469 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" event={"ID":"0cabdde2-0578-405a-9147-efe4d1db7e90","Type":"ContainerStarted","Data":"546b72ac34d10addd067108b73bef9b904aea8a895447843e1314af1b2a18e00"} Jan 29 16:30:11 crc kubenswrapper[4714]: I0129 16:30:11.252363 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" event={"ID":"0cabdde2-0578-405a-9147-efe4d1db7e90","Type":"ContainerStarted","Data":"be7a968b80d5f3fb2bec436bd6006753f8b129fcc60e1b9be5f43a75c59f2e55"} Jan 29 16:30:11 crc kubenswrapper[4714]: I0129 16:30:11.270632 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" podStartSLOduration=2.2706143770000002 podStartE2EDuration="2.270614377s" podCreationTimestamp="2026-01-29 16:30:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:11.268691637 +0000 UTC m=+1217.789192817" watchObservedRunningTime="2026-01-29 16:30:11.270614377 +0000 UTC m=+1217.791115507" Jan 29 16:30:12 crc kubenswrapper[4714]: E0129 16:30:12.188315 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:30:13 crc kubenswrapper[4714]: I0129 16:30:13.269836 4714 generic.go:334] "Generic (PLEG): container finished" podID="0cabdde2-0578-405a-9147-efe4d1db7e90" containerID="be7a968b80d5f3fb2bec436bd6006753f8b129fcc60e1b9be5f43a75c59f2e55" exitCode=0 Jan 29 16:30:13 crc kubenswrapper[4714]: I0129 16:30:13.269873 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" event={"ID":"0cabdde2-0578-405a-9147-efe4d1db7e90","Type":"ContainerDied","Data":"be7a968b80d5f3fb2bec436bd6006753f8b129fcc60e1b9be5f43a75c59f2e55"} Jan 29 16:30:14 crc kubenswrapper[4714]: I0129 16:30:14.623246 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" Jan 29 16:30:14 crc kubenswrapper[4714]: I0129 16:30:14.715515 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0cabdde2-0578-405a-9147-efe4d1db7e90-etc-machine-id\") pod \"0cabdde2-0578-405a-9147-efe4d1db7e90\" (UID: \"0cabdde2-0578-405a-9147-efe4d1db7e90\") " Jan 29 16:30:14 crc kubenswrapper[4714]: I0129 16:30:14.715637 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cabdde2-0578-405a-9147-efe4d1db7e90-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0cabdde2-0578-405a-9147-efe4d1db7e90" (UID: "0cabdde2-0578-405a-9147-efe4d1db7e90"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:14 crc kubenswrapper[4714]: I0129 16:30:14.715677 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0cabdde2-0578-405a-9147-efe4d1db7e90-db-sync-config-data\") pod \"0cabdde2-0578-405a-9147-efe4d1db7e90\" (UID: \"0cabdde2-0578-405a-9147-efe4d1db7e90\") " Jan 29 16:30:14 crc kubenswrapper[4714]: I0129 16:30:14.715834 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cabdde2-0578-405a-9147-efe4d1db7e90-combined-ca-bundle\") pod \"0cabdde2-0578-405a-9147-efe4d1db7e90\" (UID: \"0cabdde2-0578-405a-9147-efe4d1db7e90\") " Jan 29 16:30:14 crc kubenswrapper[4714]: I0129 16:30:14.715859 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5p98\" (UniqueName: \"kubernetes.io/projected/0cabdde2-0578-405a-9147-efe4d1db7e90-kube-api-access-w5p98\") pod \"0cabdde2-0578-405a-9147-efe4d1db7e90\" (UID: \"0cabdde2-0578-405a-9147-efe4d1db7e90\") " Jan 29 16:30:14 crc kubenswrapper[4714]: I0129 16:30:14.715908 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cabdde2-0578-405a-9147-efe4d1db7e90-scripts\") pod \"0cabdde2-0578-405a-9147-efe4d1db7e90\" (UID: \"0cabdde2-0578-405a-9147-efe4d1db7e90\") " Jan 29 16:30:14 crc kubenswrapper[4714]: I0129 16:30:14.715975 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cabdde2-0578-405a-9147-efe4d1db7e90-config-data\") pod \"0cabdde2-0578-405a-9147-efe4d1db7e90\" (UID: \"0cabdde2-0578-405a-9147-efe4d1db7e90\") " Jan 29 16:30:14 crc kubenswrapper[4714]: I0129 16:30:14.716384 4714 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0cabdde2-0578-405a-9147-efe4d1db7e90-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:14 crc kubenswrapper[4714]: I0129 16:30:14.720847 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cabdde2-0578-405a-9147-efe4d1db7e90-kube-api-access-w5p98" (OuterVolumeSpecName: "kube-api-access-w5p98") pod "0cabdde2-0578-405a-9147-efe4d1db7e90" (UID: "0cabdde2-0578-405a-9147-efe4d1db7e90"). InnerVolumeSpecName "kube-api-access-w5p98". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:14 crc kubenswrapper[4714]: I0129 16:30:14.720860 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cabdde2-0578-405a-9147-efe4d1db7e90-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0cabdde2-0578-405a-9147-efe4d1db7e90" (UID: "0cabdde2-0578-405a-9147-efe4d1db7e90"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:14 crc kubenswrapper[4714]: I0129 16:30:14.721018 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cabdde2-0578-405a-9147-efe4d1db7e90-scripts" (OuterVolumeSpecName: "scripts") pod "0cabdde2-0578-405a-9147-efe4d1db7e90" (UID: "0cabdde2-0578-405a-9147-efe4d1db7e90"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:14 crc kubenswrapper[4714]: I0129 16:30:14.734254 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cabdde2-0578-405a-9147-efe4d1db7e90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0cabdde2-0578-405a-9147-efe4d1db7e90" (UID: "0cabdde2-0578-405a-9147-efe4d1db7e90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:14 crc kubenswrapper[4714]: I0129 16:30:14.747695 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cabdde2-0578-405a-9147-efe4d1db7e90-config-data" (OuterVolumeSpecName: "config-data") pod "0cabdde2-0578-405a-9147-efe4d1db7e90" (UID: "0cabdde2-0578-405a-9147-efe4d1db7e90"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:14 crc kubenswrapper[4714]: I0129 16:30:14.817262 4714 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cabdde2-0578-405a-9147-efe4d1db7e90-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:14 crc kubenswrapper[4714]: I0129 16:30:14.817300 4714 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cabdde2-0578-405a-9147-efe4d1db7e90-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:14 crc kubenswrapper[4714]: I0129 16:30:14.817313 4714 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0cabdde2-0578-405a-9147-efe4d1db7e90-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:14 crc kubenswrapper[4714]: I0129 16:30:14.817323 4714 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cabdde2-0578-405a-9147-efe4d1db7e90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:14 crc kubenswrapper[4714]: I0129 16:30:14.817332 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5p98\" (UniqueName: \"kubernetes.io/projected/0cabdde2-0578-405a-9147-efe4d1db7e90-kube-api-access-w5p98\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.288458 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" event={"ID":"0cabdde2-0578-405a-9147-efe4d1db7e90","Type":"ContainerDied","Data":"546b72ac34d10addd067108b73bef9b904aea8a895447843e1314af1b2a18e00"} Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.288499 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="546b72ac34d10addd067108b73bef9b904aea8a895447843e1314af1b2a18e00" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.288557 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-zpbtx" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.539075 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 29 16:30:15 crc kubenswrapper[4714]: E0129 16:30:15.539764 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cabdde2-0578-405a-9147-efe4d1db7e90" containerName="cinder-db-sync" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.539793 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cabdde2-0578-405a-9147-efe4d1db7e90" containerName="cinder-db-sync" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.540032 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cabdde2-0578-405a-9147-efe4d1db7e90" containerName="cinder-db-sync" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.540892 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.543627 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-config-data" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.551816 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"combined-ca-bundle" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.552055 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-scripts" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.552285 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-cinder-dockercfg-thgfp" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.556272 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-scheduler-config-data" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.570670 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.581405 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.594840 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.596531 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-backup-config-data" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.625076 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.630027 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/130b46c6-e7e5-4202-bea4-1214ec4766e8-scripts\") pod \"cinder-scheduler-0\" (UID: \"130b46c6-e7e5-4202-bea4-1214ec4766e8\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.630321 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/130b46c6-e7e5-4202-bea4-1214ec4766e8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"130b46c6-e7e5-4202-bea4-1214ec4766e8\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.630462 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/130b46c6-e7e5-4202-bea4-1214ec4766e8-config-data\") pod \"cinder-scheduler-0\" (UID: \"130b46c6-e7e5-4202-bea4-1214ec4766e8\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.630605 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/130b46c6-e7e5-4202-bea4-1214ec4766e8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"130b46c6-e7e5-4202-bea4-1214ec4766e8\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.630727 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85zz9\" (UniqueName: \"kubernetes.io/projected/130b46c6-e7e5-4202-bea4-1214ec4766e8-kube-api-access-85zz9\") pod \"cinder-scheduler-0\" (UID: \"130b46c6-e7e5-4202-bea4-1214ec4766e8\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.630843 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/130b46c6-e7e5-4202-bea4-1214ec4766e8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"130b46c6-e7e5-4202-bea4-1214ec4766e8\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.647977 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.649029 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.653796 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-volume-volume1-config-data" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.663422 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732201 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732250 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27df3c21-5ecb-4af0-9e48-a40f826dc75d-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732274 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732297 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-run\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732321 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732341 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-run\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732372 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/130b46c6-e7e5-4202-bea4-1214ec4766e8-config-data\") pod \"cinder-scheduler-0\" (UID: \"130b46c6-e7e5-4202-bea4-1214ec4766e8\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732402 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc 
kubenswrapper[4714]: I0129 16:30:15.732421 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-dev\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732437 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732457 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-lib-modules\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732487 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-dev\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732506 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732527 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732555 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732575 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27df3c21-5ecb-4af0-9e48-a40f826dc75d-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732594 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc 
kubenswrapper[4714]: I0129 16:30:15.732617 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c2fde86-e7c8-4605-9750-8464ca4b7d58-config-data-custom\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732644 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c2fde86-e7c8-4605-9750-8464ca4b7d58-scripts\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732668 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/130b46c6-e7e5-4202-bea4-1214ec4766e8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"130b46c6-e7e5-4202-bea4-1214ec4766e8\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732695 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl9k4\" (UniqueName: \"kubernetes.io/projected/7c2fde86-e7c8-4605-9750-8464ca4b7d58-kube-api-access-bl9k4\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732720 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732745 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85zz9\" (UniqueName: \"kubernetes.io/projected/130b46c6-e7e5-4202-bea4-1214ec4766e8-kube-api-access-85zz9\") pod \"cinder-scheduler-0\" (UID: \"130b46c6-e7e5-4202-bea4-1214ec4766e8\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732769 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732793 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c2fde86-e7c8-4605-9750-8464ca4b7d58-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732822 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/130b46c6-e7e5-4202-bea4-1214ec4766e8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"130b46c6-e7e5-4202-bea4-1214ec4766e8\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:30:15 crc kubenswrapper[4714]: 
I0129 16:30:15.732847 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27df3c21-5ecb-4af0-9e48-a40f826dc75d-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732878 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732915 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-etc-nvme\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732959 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/130b46c6-e7e5-4202-bea4-1214ec4766e8-scripts\") pod \"cinder-scheduler-0\" (UID: \"130b46c6-e7e5-4202-bea4-1214ec4766e8\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.732993 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/130b46c6-e7e5-4202-bea4-1214ec4766e8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"130b46c6-e7e5-4202-bea4-1214ec4766e8\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.733015 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c2fde86-e7c8-4605-9750-8464ca4b7d58-config-data\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.733048 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27df3c21-5ecb-4af0-9e48-a40f826dc75d-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.733070 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl96t\" (UniqueName: \"kubernetes.io/projected/27df3c21-5ecb-4af0-9e48-a40f826dc75d-kube-api-access-jl96t\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.733089 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-sys\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.733111 4714 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-sys\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.733964 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/130b46c6-e7e5-4202-bea4-1214ec4766e8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"130b46c6-e7e5-4202-bea4-1214ec4766e8\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.742725 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/130b46c6-e7e5-4202-bea4-1214ec4766e8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"130b46c6-e7e5-4202-bea4-1214ec4766e8\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.743206 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/130b46c6-e7e5-4202-bea4-1214ec4766e8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"130b46c6-e7e5-4202-bea4-1214ec4766e8\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.744724 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/130b46c6-e7e5-4202-bea4-1214ec4766e8-scripts\") pod \"cinder-scheduler-0\" (UID: \"130b46c6-e7e5-4202-bea4-1214ec4766e8\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.748814 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/130b46c6-e7e5-4202-bea4-1214ec4766e8-config-data\") pod \"cinder-scheduler-0\" (UID: \"130b46c6-e7e5-4202-bea4-1214ec4766e8\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.754002 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.755408 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.758853 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-api-config-data" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.759050 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cert-cinder-public-svc" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.759855 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cert-cinder-internal-svc" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.762335 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.769812 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85zz9\" (UniqueName: \"kubernetes.io/projected/130b46c6-e7e5-4202-bea4-1214ec4766e8-kube-api-access-85zz9\") pod \"cinder-scheduler-0\" (UID: \"130b46c6-e7e5-4202-bea4-1214ec4766e8\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.834286 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c2fde86-e7c8-4605-9750-8464ca4b7d58-config-data\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.834351 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27df3c21-5ecb-4af0-9e48-a40f826dc75d-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.834375 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl96t\" (UniqueName: \"kubernetes.io/projected/27df3c21-5ecb-4af0-9e48-a40f826dc75d-kube-api-access-jl96t\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.834395 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-sys\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.834416 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-sys\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.834435 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.834456 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/27df3c21-5ecb-4af0-9e48-a40f826dc75d-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.834477 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.834501 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/93eb21c7-d0f9-4648-a671-03d3ccd28429-etc-machine-id\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.834519 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-config-data\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835037 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-config-data-custom\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835124 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-run\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835151 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-public-tls-certs\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835171 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835189 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835246 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-run\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " 
pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835297 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835319 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-dev\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835344 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835365 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-lib-modules\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835399 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-dev\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835454 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835481 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835504 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835550 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835577 4714 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27df3c21-5ecb-4af0-9e48-a40f826dc75d-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835608 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835642 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c2fde86-e7c8-4605-9750-8464ca4b7d58-config-data-custom\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835682 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c2fde86-e7c8-4605-9750-8464ca4b7d58-scripts\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835716 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-scripts\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835753 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl9k4\" (UniqueName: \"kubernetes.io/projected/7c2fde86-e7c8-4605-9750-8464ca4b7d58-kube-api-access-bl9k4\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835790 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835829 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835854 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c2fde86-e7c8-4605-9750-8464ca4b7d58-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835884 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/93eb21c7-d0f9-4648-a671-03d3ccd28429-logs\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.835927 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-str9z\" (UniqueName: \"kubernetes.io/projected/93eb21c7-d0f9-4648-a671-03d3ccd28429-kube-api-access-str9z\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.836008 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27df3c21-5ecb-4af0-9e48-a40f826dc75d-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.836033 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.836071 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.836109 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-etc-nvme\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.836259 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-etc-nvme\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.836307 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-run\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.836340 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.836366 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-run\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.836411 4714 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.836441 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-dev\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.836469 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.836500 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-lib-modules\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.836526 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-dev\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.836570 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.836597 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.836628 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.836695 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.836728 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-sys\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " 
pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.836744 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.837201 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.837581 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.837723 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.837766 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-sys\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.838697 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c2fde86-e7c8-4605-9750-8464ca4b7d58-config-data\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.840140 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27df3c21-5ecb-4af0-9e48-a40f826dc75d-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.840462 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c2fde86-e7c8-4605-9750-8464ca4b7d58-scripts\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.840481 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c2fde86-e7c8-4605-9750-8464ca4b7d58-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.841146 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/7c2fde86-e7c8-4605-9750-8464ca4b7d58-config-data-custom\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.841543 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27df3c21-5ecb-4af0-9e48-a40f826dc75d-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.843740 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27df3c21-5ecb-4af0-9e48-a40f826dc75d-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.848541 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27df3c21-5ecb-4af0-9e48-a40f826dc75d-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.855849 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl96t\" (UniqueName: \"kubernetes.io/projected/27df3c21-5ecb-4af0-9e48-a40f826dc75d-kube-api-access-jl96t\") pod \"cinder-volume-volume1-0\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.859530 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl9k4\" (UniqueName: \"kubernetes.io/projected/7c2fde86-e7c8-4605-9750-8464ca4b7d58-kube-api-access-bl9k4\") pod \"cinder-backup-0\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.861179 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.922711 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.937914 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/93eb21c7-d0f9-4648-a671-03d3ccd28429-etc-machine-id\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.937993 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-config-data\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.938016 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-config-data-custom\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.938022 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/93eb21c7-d0f9-4648-a671-03d3ccd28429-etc-machine-id\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.938046 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-public-tls-certs\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.938154 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.938231 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-scripts\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.938289 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93eb21c7-d0f9-4648-a671-03d3ccd28429-logs\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.938326 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-str9z\" (UniqueName: \"kubernetes.io/projected/93eb21c7-d0f9-4648-a671-03d3ccd28429-kube-api-access-str9z\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.938358 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.939247 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93eb21c7-d0f9-4648-a671-03d3ccd28429-logs\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.945201 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.945216 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-public-tls-certs\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.946374 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-config-data\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.946654 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.949242 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-scripts\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.956632 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-config-data-custom\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.961673 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 29 16:30:15 crc kubenswrapper[4714]: I0129 16:30:15.976619 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-str9z\" (UniqueName: \"kubernetes.io/projected/93eb21c7-d0f9-4648-a671-03d3ccd28429-kube-api-access-str9z\") pod \"cinder-api-0\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:16 crc kubenswrapper[4714]: I0129 16:30:16.125315 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:16 crc kubenswrapper[4714]: I0129 16:30:16.319172 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 29 16:30:16 crc kubenswrapper[4714]: I0129 16:30:16.418206 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 29 16:30:16 crc kubenswrapper[4714]: W0129 16:30:16.428473 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c2fde86_e7c8_4605_9750_8464ca4b7d58.slice/crio-e25204d76e2148fee59f9d4ff12c3c8594f24cfcc42364e257052da501eeec8a WatchSource:0}: Error finding container e25204d76e2148fee59f9d4ff12c3c8594f24cfcc42364e257052da501eeec8a: Status 404 returned error can't find the container with id e25204d76e2148fee59f9d4ff12c3c8594f24cfcc42364e257052da501eeec8a Jan 29 16:30:16 crc kubenswrapper[4714]: I0129 16:30:16.526945 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 29 16:30:16 crc kubenswrapper[4714]: W0129 16:30:16.536012 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27df3c21_5ecb_4af0_9e48_a40f826dc75d.slice/crio-a215b87a9f7bec152d91f39a039625249d594cfc1ffc1ed4fb03f005b6ae19e1 WatchSource:0}: Error finding container a215b87a9f7bec152d91f39a039625249d594cfc1ffc1ed4fb03f005b6ae19e1: Status 404 returned error can't find the container with id a215b87a9f7bec152d91f39a039625249d594cfc1ffc1ed4fb03f005b6ae19e1 Jan 29 16:30:16 crc kubenswrapper[4714]: I0129 16:30:16.660463 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 29 16:30:16 crc kubenswrapper[4714]: W0129 16:30:16.668809 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93eb21c7_d0f9_4648_a671_03d3ccd28429.slice/crio-9cf9a3f95739dab5cf604ede39f2741a3a785d75007c516172ae75ff6b3665b1 WatchSource:0}: Error finding container 9cf9a3f95739dab5cf604ede39f2741a3a785d75007c516172ae75ff6b3665b1: Status 404 returned error can't find the container with id 9cf9a3f95739dab5cf604ede39f2741a3a785d75007c516172ae75ff6b3665b1 Jan 29 16:30:17 crc kubenswrapper[4714]: I0129 16:30:17.305056 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"27df3c21-5ecb-4af0-9e48-a40f826dc75d","Type":"ContainerStarted","Data":"4565e725bbc777847d5f682f8f2a88a1faea9eac4df83a536e5f10e10af89617"} Jan 29 16:30:17 crc kubenswrapper[4714]: I0129 16:30:17.305597 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"27df3c21-5ecb-4af0-9e48-a40f826dc75d","Type":"ContainerStarted","Data":"6825612e424e7d18534f6766e6735d779be6e5909c3365b087bf62b9bfd4f305"} Jan 29 16:30:17 crc kubenswrapper[4714]: I0129 16:30:17.305610 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"27df3c21-5ecb-4af0-9e48-a40f826dc75d","Type":"ContainerStarted","Data":"a215b87a9f7bec152d91f39a039625249d594cfc1ffc1ed4fb03f005b6ae19e1"} Jan 29 16:30:17 crc kubenswrapper[4714]: I0129 16:30:17.312502 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" 
event={"ID":"93eb21c7-d0f9-4648-a671-03d3ccd28429","Type":"ContainerStarted","Data":"b54fd3f2968e79a20a0b8cb352bdc32d69a2daf77ebdec8342b8fb48a253ab80"} Jan 29 16:30:17 crc kubenswrapper[4714]: I0129 16:30:17.312551 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"93eb21c7-d0f9-4648-a671-03d3ccd28429","Type":"ContainerStarted","Data":"9cf9a3f95739dab5cf604ede39f2741a3a785d75007c516172ae75ff6b3665b1"} Jan 29 16:30:17 crc kubenswrapper[4714]: I0129 16:30:17.333352 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"7c2fde86-e7c8-4605-9750-8464ca4b7d58","Type":"ContainerStarted","Data":"c236db1e5dbb8d372959347da33330e9e06dab48278781aa11dc0c6a6f372af8"} Jan 29 16:30:17 crc kubenswrapper[4714]: I0129 16:30:17.333409 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"7c2fde86-e7c8-4605-9750-8464ca4b7d58","Type":"ContainerStarted","Data":"2caedfb03564966ae2d6b87961e4e7d74fd983fe0c16c41eb96e2d6ca6ebd267"} Jan 29 16:30:17 crc kubenswrapper[4714]: I0129 16:30:17.333425 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"7c2fde86-e7c8-4605-9750-8464ca4b7d58","Type":"ContainerStarted","Data":"e25204d76e2148fee59f9d4ff12c3c8594f24cfcc42364e257052da501eeec8a"} Jan 29 16:30:17 crc kubenswrapper[4714]: I0129 16:30:17.335369 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podStartSLOduration=2.335355013 podStartE2EDuration="2.335355013s" podCreationTimestamp="2026-01-29 16:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:17.324560676 +0000 UTC m=+1223.845061796" watchObservedRunningTime="2026-01-29 16:30:17.335355013 +0000 UTC m=+1223.855856133" Jan 29 16:30:17 crc kubenswrapper[4714]: I0129 16:30:17.343085 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"130b46c6-e7e5-4202-bea4-1214ec4766e8","Type":"ContainerStarted","Data":"bccd615573c7599b167665a1108953cb3d1759c6fb3d978bae9a4b7e75fbc11a"} Jan 29 16:30:17 crc kubenswrapper[4714]: I0129 16:30:17.343125 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"130b46c6-e7e5-4202-bea4-1214ec4766e8","Type":"ContainerStarted","Data":"4117860f9b4009b5187f645af44ec4d5a39f7166f668190572db33d3a8f8e24c"} Jan 29 16:30:17 crc kubenswrapper[4714]: I0129 16:30:17.366715 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-backup-0" podStartSLOduration=2.3666976330000002 podStartE2EDuration="2.366697633s" podCreationTimestamp="2026-01-29 16:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:17.359308807 +0000 UTC m=+1223.879809937" watchObservedRunningTime="2026-01-29 16:30:17.366697633 +0000 UTC m=+1223.887198753" Jan 29 16:30:18 crc kubenswrapper[4714]: I0129 16:30:18.373614 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"130b46c6-e7e5-4202-bea4-1214ec4766e8","Type":"ContainerStarted","Data":"bbf7723cee6103ad631b2d89beaf6a72a626280c58f32453fc92b4acddbf7202"} Jan 29 16:30:18 crc kubenswrapper[4714]: I0129 16:30:18.378728 4714 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"93eb21c7-d0f9-4648-a671-03d3ccd28429","Type":"ContainerStarted","Data":"0c9e8f216ea8d08fef702164fee1c489d59f45d35804acfb1d8d5c8a08af65d1"} Jan 29 16:30:18 crc kubenswrapper[4714]: I0129 16:30:18.398154 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-scheduler-0" podStartSLOduration=3.39813713 podStartE2EDuration="3.39813713s" podCreationTimestamp="2026-01-29 16:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:18.396986616 +0000 UTC m=+1224.917487736" watchObservedRunningTime="2026-01-29 16:30:18.39813713 +0000 UTC m=+1224.918638270" Jan 29 16:30:18 crc kubenswrapper[4714]: I0129 16:30:18.424955 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-api-0" podStartSLOduration=3.424926214 podStartE2EDuration="3.424926214s" podCreationTimestamp="2026-01-29 16:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:18.42046594 +0000 UTC m=+1224.940967070" watchObservedRunningTime="2026-01-29 16:30:18.424926214 +0000 UTC m=+1224.945427334" Jan 29 16:30:19 crc kubenswrapper[4714]: I0129 16:30:19.387700 4714 generic.go:334] "Generic (PLEG): container finished" podID="27df3c21-5ecb-4af0-9e48-a40f826dc75d" containerID="4565e725bbc777847d5f682f8f2a88a1faea9eac4df83a536e5f10e10af89617" exitCode=1 Jan 29 16:30:19 crc kubenswrapper[4714]: I0129 16:30:19.387740 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"27df3c21-5ecb-4af0-9e48-a40f826dc75d","Type":"ContainerDied","Data":"4565e725bbc777847d5f682f8f2a88a1faea9eac4df83a536e5f10e10af89617"} Jan 29 16:30:19 crc kubenswrapper[4714]: I0129 16:30:19.388339 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:19 crc kubenswrapper[4714]: I0129 16:30:19.388698 4714 scope.go:117] "RemoveContainer" containerID="4565e725bbc777847d5f682f8f2a88a1faea9eac4df83a536e5f10e10af89617" Jan 29 16:30:20 crc kubenswrapper[4714]: I0129 16:30:20.397684 4714 generic.go:334] "Generic (PLEG): container finished" podID="27df3c21-5ecb-4af0-9e48-a40f826dc75d" containerID="6825612e424e7d18534f6766e6735d779be6e5909c3365b087bf62b9bfd4f305" exitCode=1 Jan 29 16:30:20 crc kubenswrapper[4714]: I0129 16:30:20.397885 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"27df3c21-5ecb-4af0-9e48-a40f826dc75d","Type":"ContainerDied","Data":"6825612e424e7d18534f6766e6735d779be6e5909c3365b087bf62b9bfd4f305"} Jan 29 16:30:20 crc kubenswrapper[4714]: I0129 16:30:20.398534 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"27df3c21-5ecb-4af0-9e48-a40f826dc75d","Type":"ContainerStarted","Data":"998fc5de34fad62db822ad95298396d20ef6c7ee52c10eeb679ec05818c2f0ea"} Jan 29 16:30:20 crc kubenswrapper[4714]: I0129 16:30:20.399090 4714 scope.go:117] "RemoveContainer" containerID="6825612e424e7d18534f6766e6735d779be6e5909c3365b087bf62b9bfd4f305" Jan 29 16:30:20 crc kubenswrapper[4714]: I0129 16:30:20.862600 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 
16:30:20 crc kubenswrapper[4714]: I0129 16:30:20.924031 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-backup-0"
Jan 29 16:30:20 crc kubenswrapper[4714]: I0129 16:30:20.963064 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 29 16:30:20 crc kubenswrapper[4714]: I0129 16:30:20.963139 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 29 16:30:21 crc kubenswrapper[4714]: I0129 16:30:21.406860 4714 generic.go:334] "Generic (PLEG): container finished" podID="27df3c21-5ecb-4af0-9e48-a40f826dc75d" containerID="998fc5de34fad62db822ad95298396d20ef6c7ee52c10eeb679ec05818c2f0ea" exitCode=1
Jan 29 16:30:21 crc kubenswrapper[4714]: I0129 16:30:21.406913 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"27df3c21-5ecb-4af0-9e48-a40f826dc75d","Type":"ContainerDied","Data":"998fc5de34fad62db822ad95298396d20ef6c7ee52c10eeb679ec05818c2f0ea"}
Jan 29 16:30:21 crc kubenswrapper[4714]: I0129 16:30:21.406963 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"27df3c21-5ecb-4af0-9e48-a40f826dc75d","Type":"ContainerStarted","Data":"28c248e70e5735ed873359efc695cf5ba4233b35ef2d24a816c0f50304261061"}
Jan 29 16:30:21 crc kubenswrapper[4714]: I0129 16:30:21.406980 4714 scope.go:117] "RemoveContainer" containerID="4565e725bbc777847d5f682f8f2a88a1faea9eac4df83a536e5f10e10af89617"
Jan 29 16:30:21 crc kubenswrapper[4714]: I0129 16:30:21.408428 4714 scope.go:117] "RemoveContainer" containerID="998fc5de34fad62db822ad95298396d20ef6c7ee52c10eeb679ec05818c2f0ea"
Jan 29 16:30:21 crc kubenswrapper[4714]: E0129 16:30:21.408769 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(27df3c21-5ecb-4af0-9e48-a40f826dc75d)\"" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="27df3c21-5ecb-4af0-9e48-a40f826dc75d"
Jan 29 16:30:22 crc kubenswrapper[4714]: I0129 16:30:22.416971 4714 generic.go:334] "Generic (PLEG): container finished" podID="27df3c21-5ecb-4af0-9e48-a40f826dc75d" containerID="28c248e70e5735ed873359efc695cf5ba4233b35ef2d24a816c0f50304261061" exitCode=1
Jan 29 16:30:22 crc kubenswrapper[4714]: I0129 16:30:22.417016 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"27df3c21-5ecb-4af0-9e48-a40f826dc75d","Type":"ContainerDied","Data":"28c248e70e5735ed873359efc695cf5ba4233b35ef2d24a816c0f50304261061"}
Jan 29 16:30:22 crc kubenswrapper[4714]: I0129 16:30:22.417054 4714 scope.go:117] "RemoveContainer" containerID="6825612e424e7d18534f6766e6735d779be6e5909c3365b087bf62b9bfd4f305"
Jan 29 16:30:22 crc kubenswrapper[4714]: I0129 16:30:22.418542 4714 scope.go:117] "RemoveContainer" containerID="28c248e70e5735ed873359efc695cf5ba4233b35ef2d24a816c0f50304261061"
Jan 29 16:30:22 crc kubenswrapper[4714]: I0129 16:30:22.418576 4714 scope.go:117] "RemoveContainer" containerID="998fc5de34fad62db822ad95298396d20ef6c7ee52c10eeb679ec05818c2f0ea"
Jan 29 16:30:22 crc kubenswrapper[4714]: E0129 16:30:22.419062 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(27df3c21-5ecb-4af0-9e48-a40f826dc75d)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(27df3c21-5ecb-4af0-9e48-a40f826dc75d)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="27df3c21-5ecb-4af0-9e48-a40f826dc75d"
Jan 29 16:30:23 crc kubenswrapper[4714]: I0129 16:30:23.430296 4714 scope.go:117] "RemoveContainer" containerID="28c248e70e5735ed873359efc695cf5ba4233b35ef2d24a816c0f50304261061"
Jan 29 16:30:23 crc kubenswrapper[4714]: I0129 16:30:23.430353 4714 scope.go:117] "RemoveContainer" containerID="998fc5de34fad62db822ad95298396d20ef6c7ee52c10eeb679ec05818c2f0ea"
Jan 29 16:30:23 crc kubenswrapper[4714]: E0129 16:30:23.430834 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(27df3c21-5ecb-4af0-9e48-a40f826dc75d)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(27df3c21-5ecb-4af0-9e48-a40f826dc75d)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="27df3c21-5ecb-4af0-9e48-a40f826dc75d"
Jan 29 16:30:24 crc kubenswrapper[4714]: E0129 16:30:24.190248 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd"
Jan 29 16:30:24 crc kubenswrapper[4714]: I0129 16:30:24.962845 4714 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 29 16:30:24 crc kubenswrapper[4714]: I0129 16:30:24.964113 4714 scope.go:117] "RemoveContainer" containerID="28c248e70e5735ed873359efc695cf5ba4233b35ef2d24a816c0f50304261061"
Jan 29 16:30:24 crc kubenswrapper[4714]: I0129 16:30:24.964143 4714 scope.go:117] "RemoveContainer" containerID="998fc5de34fad62db822ad95298396d20ef6c7ee52c10eeb679ec05818c2f0ea"
Jan 29 16:30:24 crc kubenswrapper[4714]: E0129 16:30:24.964704 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(27df3c21-5ecb-4af0-9e48-a40f826dc75d)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(27df3c21-5ecb-4af0-9e48-a40f826dc75d)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="27df3c21-5ecb-4af0-9e48-a40f826dc75d"
Jan 29 16:30:25 crc kubenswrapper[4714]: I0129 16:30:25.962985 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 29 16:30:25 crc kubenswrapper[4714]: I0129 16:30:25.963344 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 29 16:30:25 crc kubenswrapper[4714]: I0129 16:30:25.964102 4714 scope.go:117] "RemoveContainer" containerID="28c248e70e5735ed873359efc695cf5ba4233b35ef2d24a816c0f50304261061"
Jan 29 16:30:25 crc kubenswrapper[4714]: I0129 16:30:25.964118 4714 scope.go:117] "RemoveContainer" containerID="998fc5de34fad62db822ad95298396d20ef6c7ee52c10eeb679ec05818c2f0ea"
Jan 29 16:30:25 crc kubenswrapper[4714]: E0129 16:30:25.964568 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(27df3c21-5ecb-4af0-9e48-a40f826dc75d)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(27df3c21-5ecb-4af0-9e48-a40f826dc75d)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="27df3c21-5ecb-4af0-9e48-a40f826dc75d"
Jan 29 16:30:26 crc kubenswrapper[4714]: I0129 16:30:26.133826 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 29 16:30:26 crc kubenswrapper[4714]: I0129 16:30:26.144560 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-backup-0"
Jan 29 16:30:27 crc kubenswrapper[4714]: I0129 16:30:27.843912 4714 patch_prober.go:28] interesting pod/machine-config-daemon-ppngk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:30:27 crc kubenswrapper[4714]: I0129 16:30:27.844291 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:30:28 crc kubenswrapper[4714]: I0129 16:30:28.106788 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/cinder-api-0"
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.110640 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-zpbtx"]
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.121775 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-zpbtx"]
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.168094 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"]
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.168509 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-0" podUID="7c2fde86-e7c8-4605-9750-8464ca4b7d58" containerName="cinder-backup" containerID="cri-o://2caedfb03564966ae2d6b87961e4e7d74fd983fe0c16c41eb96e2d6ca6ebd267" gracePeriod=30
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.169023 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-0" podUID="7c2fde86-e7c8-4605-9750-8464ca4b7d58" containerName="probe" containerID="cri-o://c236db1e5dbb8d372959347da33330e9e06dab48278781aa11dc0c6a6f372af8" gracePeriod=30
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.202027 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"]
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.214280 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"]
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.214551 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-0" podUID="130b46c6-e7e5-4202-bea4-1214ec4766e8" containerName="cinder-scheduler" containerID="cri-o://bccd615573c7599b167665a1108953cb3d1759c6fb3d978bae9a4b7e75fbc11a" gracePeriod=30
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.214684 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-0" podUID="130b46c6-e7e5-4202-bea4-1214ec4766e8" containerName="probe" containerID="cri-o://bbf7723cee6103ad631b2d89beaf6a72a626280c58f32453fc92b4acddbf7202" gracePeriod=30
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.228793 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"]
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.229057 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-0" podUID="93eb21c7-d0f9-4648-a671-03d3ccd28429" containerName="cinder-api-log" containerID="cri-o://b54fd3f2968e79a20a0b8cb352bdc32d69a2daf77ebdec8342b8fb48a253ab80" gracePeriod=30
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.229176 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-0" podUID="93eb21c7-d0f9-4648-a671-03d3ccd28429" containerName="cinder-api" containerID="cri-o://0c9e8f216ea8d08fef702164fee1c489d59f45d35804acfb1d8d5c8a08af65d1" gracePeriod=30
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.234077 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder6e48-account-delete-8kw8k"]
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.235090 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder6e48-account-delete-8kw8k"
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.238539 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="cinder-kuttl-tests/cinder-api-0" podUID="93eb21c7-d0f9-4648-a671-03d3ccd28429" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.109:8776/healthcheck\": EOF"
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.247467 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder6e48-account-delete-8kw8k"]
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.360963 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2da2589f-6f6a-4921-bc7e-70e1b62979f1-operator-scripts\") pod \"cinder6e48-account-delete-8kw8k\" (UID: \"2da2589f-6f6a-4921-bc7e-70e1b62979f1\") " pod="cinder-kuttl-tests/cinder6e48-account-delete-8kw8k"
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.361012 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56bph\" (UniqueName: \"kubernetes.io/projected/2da2589f-6f6a-4921-bc7e-70e1b62979f1-kube-api-access-56bph\") pod \"cinder6e48-account-delete-8kw8k\" (UID: \"2da2589f-6f6a-4921-bc7e-70e1b62979f1\") " pod="cinder-kuttl-tests/cinder6e48-account-delete-8kw8k"
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.461888 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2da2589f-6f6a-4921-bc7e-70e1b62979f1-operator-scripts\") pod \"cinder6e48-account-delete-8kw8k\" (UID: \"2da2589f-6f6a-4921-bc7e-70e1b62979f1\") " pod="cinder-kuttl-tests/cinder6e48-account-delete-8kw8k"
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.461966 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56bph\" (UniqueName: \"kubernetes.io/projected/2da2589f-6f6a-4921-bc7e-70e1b62979f1-kube-api-access-56bph\") pod \"cinder6e48-account-delete-8kw8k\" (UID: \"2da2589f-6f6a-4921-bc7e-70e1b62979f1\") " pod="cinder-kuttl-tests/cinder6e48-account-delete-8kw8k"
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.463055 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2da2589f-6f6a-4921-bc7e-70e1b62979f1-operator-scripts\") pod \"cinder6e48-account-delete-8kw8k\" (UID: \"2da2589f-6f6a-4921-bc7e-70e1b62979f1\") " pod="cinder-kuttl-tests/cinder6e48-account-delete-8kw8k"
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.477557 4714 generic.go:334] "Generic (PLEG): container finished" podID="93eb21c7-d0f9-4648-a671-03d3ccd28429" containerID="b54fd3f2968e79a20a0b8cb352bdc32d69a2daf77ebdec8342b8fb48a253ab80" exitCode=143
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.477610 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"93eb21c7-d0f9-4648-a671-03d3ccd28429","Type":"ContainerDied","Data":"b54fd3f2968e79a20a0b8cb352bdc32d69a2daf77ebdec8342b8fb48a253ab80"}
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.503559 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56bph\" (UniqueName: \"kubernetes.io/projected/2da2589f-6f6a-4921-bc7e-70e1b62979f1-kube-api-access-56bph\") pod \"cinder6e48-account-delete-8kw8k\" (UID: \"2da2589f-6f6a-4921-bc7e-70e1b62979f1\") " pod="cinder-kuttl-tests/cinder6e48-account-delete-8kw8k"
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.567686 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.569158 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder6e48-account-delete-8kw8k"
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.664947 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27df3c21-5ecb-4af0-9e48-a40f826dc75d-combined-ca-bundle\") pod \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") "
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.664999 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-etc-iscsi\") pod \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") "
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.665020 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-etc-nvme\") pod \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") "
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.665048 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-lib-modules\") pod \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") "
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.665122 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27df3c21-5ecb-4af0-9e48-a40f826dc75d-config-data\") pod \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") "
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.665154 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-var-locks-brick\") pod \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") "
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.665192 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-run\") pod \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") "
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.665233 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl96t\" (UniqueName: \"kubernetes.io/projected/27df3c21-5ecb-4af0-9e48-a40f826dc75d-kube-api-access-jl96t\") pod \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") "
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.665258 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-etc-machine-id\") pod \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") "
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.665277 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-var-lib-cinder\") pod \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") "
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.665296 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-dev\") pod \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") "
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.665339 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27df3c21-5ecb-4af0-9e48-a40f826dc75d-config-data-custom\") pod \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") "
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.665366 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-sys\") pod \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") "
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.665405 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-var-locks-cinder\") pod \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") "
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.665430 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27df3c21-5ecb-4af0-9e48-a40f826dc75d-scripts\") pod \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\" (UID: \"27df3c21-5ecb-4af0-9e48-a40f826dc75d\") "
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.666619 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "27df3c21-5ecb-4af0-9e48-a40f826dc75d" (UID: "27df3c21-5ecb-4af0-9e48-a40f826dc75d"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.666677 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "27df3c21-5ecb-4af0-9e48-a40f826dc75d" (UID: "27df3c21-5ecb-4af0-9e48-a40f826dc75d"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.666697 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-run" (OuterVolumeSpecName: "run") pod "27df3c21-5ecb-4af0-9e48-a40f826dc75d" (UID: "27df3c21-5ecb-4af0-9e48-a40f826dc75d"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.668174 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "27df3c21-5ecb-4af0-9e48-a40f826dc75d" (UID: "27df3c21-5ecb-4af0-9e48-a40f826dc75d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.668337 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-dev" (OuterVolumeSpecName: "dev") pod "27df3c21-5ecb-4af0-9e48-a40f826dc75d" (UID: "27df3c21-5ecb-4af0-9e48-a40f826dc75d"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.668344 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "27df3c21-5ecb-4af0-9e48-a40f826dc75d" (UID: "27df3c21-5ecb-4af0-9e48-a40f826dc75d"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.668365 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "27df3c21-5ecb-4af0-9e48-a40f826dc75d" (UID: "27df3c21-5ecb-4af0-9e48-a40f826dc75d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.668377 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-sys" (OuterVolumeSpecName: "sys") pod "27df3c21-5ecb-4af0-9e48-a40f826dc75d" (UID: "27df3c21-5ecb-4af0-9e48-a40f826dc75d"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.668360 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "27df3c21-5ecb-4af0-9e48-a40f826dc75d" (UID: "27df3c21-5ecb-4af0-9e48-a40f826dc75d"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.668398 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "27df3c21-5ecb-4af0-9e48-a40f826dc75d" (UID: "27df3c21-5ecb-4af0-9e48-a40f826dc75d"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.669117 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27df3c21-5ecb-4af0-9e48-a40f826dc75d-scripts" (OuterVolumeSpecName: "scripts") pod "27df3c21-5ecb-4af0-9e48-a40f826dc75d" (UID: "27df3c21-5ecb-4af0-9e48-a40f826dc75d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.671281 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27df3c21-5ecb-4af0-9e48-a40f826dc75d-kube-api-access-jl96t" (OuterVolumeSpecName: "kube-api-access-jl96t") pod "27df3c21-5ecb-4af0-9e48-a40f826dc75d" (UID: "27df3c21-5ecb-4af0-9e48-a40f826dc75d"). InnerVolumeSpecName "kube-api-access-jl96t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.674096 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27df3c21-5ecb-4af0-9e48-a40f826dc75d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "27df3c21-5ecb-4af0-9e48-a40f826dc75d" (UID: "27df3c21-5ecb-4af0-9e48-a40f826dc75d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.709342 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27df3c21-5ecb-4af0-9e48-a40f826dc75d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27df3c21-5ecb-4af0-9e48-a40f826dc75d" (UID: "27df3c21-5ecb-4af0-9e48-a40f826dc75d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.743613 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27df3c21-5ecb-4af0-9e48-a40f826dc75d-config-data" (OuterVolumeSpecName: "config-data") pod "27df3c21-5ecb-4af0-9e48-a40f826dc75d" (UID: "27df3c21-5ecb-4af0-9e48-a40f826dc75d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.766617 4714 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27df3c21-5ecb-4af0-9e48-a40f826dc75d-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.766649 4714 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-sys\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.766659 4714 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-var-locks-cinder\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.766669 4714 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27df3c21-5ecb-4af0-9e48-a40f826dc75d-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.766676 4714 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27df3c21-5ecb-4af0-9e48-a40f826dc75d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.766684 4714 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-etc-nvme\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.766692 4714 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-etc-iscsi\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.766700 4714 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-lib-modules\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.766709 4714 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27df3c21-5ecb-4af0-9e48-a40f826dc75d-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.766719 4714 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-var-locks-brick\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.766726 4714 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-run\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.766733 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl96t\" (UniqueName: \"kubernetes.io/projected/27df3c21-5ecb-4af0-9e48-a40f826dc75d-kube-api-access-jl96t\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.766744 4714 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.766751 4714 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-var-lib-cinder\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:29 crc kubenswrapper[4714]: I0129 16:30:29.766760 4714 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/27df3c21-5ecb-4af0-9e48-a40f826dc75d-dev\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:30 crc kubenswrapper[4714]: I0129 16:30:30.004471 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder6e48-account-delete-8kw8k"]
Jan 29 16:30:30 crc kubenswrapper[4714]: W0129 16:30:30.011064 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2da2589f_6f6a_4921_bc7e_70e1b62979f1.slice/crio-47333c12ea7c2648d34e992cfdd83e276361abff785531f101ec9270b9603aa3 WatchSource:0}: Error finding container 47333c12ea7c2648d34e992cfdd83e276361abff785531f101ec9270b9603aa3: Status 404 returned error can't find the container with id 47333c12ea7c2648d34e992cfdd83e276361abff785531f101ec9270b9603aa3
Jan 29 16:30:30 crc kubenswrapper[4714]: I0129 16:30:30.192285 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cabdde2-0578-405a-9147-efe4d1db7e90" path="/var/lib/kubelet/pods/0cabdde2-0578-405a-9147-efe4d1db7e90/volumes"
Jan 29 16:30:30 crc kubenswrapper[4714]: I0129 16:30:30.485718 4714 generic.go:334] "Generic (PLEG): container finished" podID="130b46c6-e7e5-4202-bea4-1214ec4766e8" containerID="bbf7723cee6103ad631b2d89beaf6a72a626280c58f32453fc92b4acddbf7202" exitCode=0
Jan 29 16:30:30 crc kubenswrapper[4714]: I0129 16:30:30.485781 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"130b46c6-e7e5-4202-bea4-1214ec4766e8","Type":"ContainerDied","Data":"bbf7723cee6103ad631b2d89beaf6a72a626280c58f32453fc92b4acddbf7202"}
Jan 29 16:30:30 crc kubenswrapper[4714]: I0129 16:30:30.489093 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"27df3c21-5ecb-4af0-9e48-a40f826dc75d","Type":"ContainerDied","Data":"a215b87a9f7bec152d91f39a039625249d594cfc1ffc1ed4fb03f005b6ae19e1"}
Jan 29 16:30:30 crc kubenswrapper[4714]: I0129 16:30:30.489150 4714 scope.go:117] "RemoveContainer" containerID="28c248e70e5735ed873359efc695cf5ba4233b35ef2d24a816c0f50304261061"
Jan 29 16:30:30 crc kubenswrapper[4714]: I0129 16:30:30.489149 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0"
Jan 29 16:30:30 crc kubenswrapper[4714]: I0129 16:30:30.491335 4714 generic.go:334] "Generic (PLEG): container finished" podID="2da2589f-6f6a-4921-bc7e-70e1b62979f1" containerID="5aeecda1a40201485f4391ac8cf0a5c17c26ea2f9167f27b15c6c783da6f0f44" exitCode=0
Jan 29 16:30:30 crc kubenswrapper[4714]: I0129 16:30:30.491476 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder6e48-account-delete-8kw8k" event={"ID":"2da2589f-6f6a-4921-bc7e-70e1b62979f1","Type":"ContainerDied","Data":"5aeecda1a40201485f4391ac8cf0a5c17c26ea2f9167f27b15c6c783da6f0f44"}
Jan 29 16:30:30 crc kubenswrapper[4714]: I0129 16:30:30.491504 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder6e48-account-delete-8kw8k" event={"ID":"2da2589f-6f6a-4921-bc7e-70e1b62979f1","Type":"ContainerStarted","Data":"47333c12ea7c2648d34e992cfdd83e276361abff785531f101ec9270b9603aa3"}
Jan 29 16:30:30 crc kubenswrapper[4714]: I0129 16:30:30.496261 4714 generic.go:334] "Generic (PLEG): container finished" podID="7c2fde86-e7c8-4605-9750-8464ca4b7d58" containerID="c236db1e5dbb8d372959347da33330e9e06dab48278781aa11dc0c6a6f372af8" exitCode=0
Jan 29 16:30:30 crc kubenswrapper[4714]: I0129 16:30:30.496350 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"7c2fde86-e7c8-4605-9750-8464ca4b7d58","Type":"ContainerDied","Data":"c236db1e5dbb8d372959347da33330e9e06dab48278781aa11dc0c6a6f372af8"}
Jan 29 16:30:30 crc kubenswrapper[4714]: I0129 16:30:30.524168 4714 scope.go:117] "RemoveContainer" containerID="998fc5de34fad62db822ad95298396d20ef6c7ee52c10eeb679ec05818c2f0ea"
Jan 29 16:30:30 crc kubenswrapper[4714]: I0129 16:30:30.542269 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"]
Jan 29 16:30:30 crc kubenswrapper[4714]: I0129 16:30:30.548698 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"]
Jan 29 16:30:32 crc kubenswrapper[4714]: I0129 16:30:31.785590 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder6e48-account-delete-8kw8k"
Jan 29 16:30:32 crc kubenswrapper[4714]: I0129 16:30:31.899170 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2da2589f-6f6a-4921-bc7e-70e1b62979f1-operator-scripts\") pod \"2da2589f-6f6a-4921-bc7e-70e1b62979f1\" (UID: \"2da2589f-6f6a-4921-bc7e-70e1b62979f1\") "
Jan 29 16:30:32 crc kubenswrapper[4714]: I0129 16:30:31.899266 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56bph\" (UniqueName: \"kubernetes.io/projected/2da2589f-6f6a-4921-bc7e-70e1b62979f1-kube-api-access-56bph\") pod \"2da2589f-6f6a-4921-bc7e-70e1b62979f1\" (UID: \"2da2589f-6f6a-4921-bc7e-70e1b62979f1\") "
Jan 29 16:30:32 crc kubenswrapper[4714]: I0129 16:30:31.900054 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2da2589f-6f6a-4921-bc7e-70e1b62979f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2da2589f-6f6a-4921-bc7e-70e1b62979f1" (UID: "2da2589f-6f6a-4921-bc7e-70e1b62979f1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:30:32 crc kubenswrapper[4714]: I0129 16:30:31.912259 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2da2589f-6f6a-4921-bc7e-70e1b62979f1-kube-api-access-56bph" (OuterVolumeSpecName: "kube-api-access-56bph") pod "2da2589f-6f6a-4921-bc7e-70e1b62979f1" (UID: "2da2589f-6f6a-4921-bc7e-70e1b62979f1"). InnerVolumeSpecName "kube-api-access-56bph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:30:32 crc kubenswrapper[4714]: I0129 16:30:32.001447 4714 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2da2589f-6f6a-4921-bc7e-70e1b62979f1-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:32 crc kubenswrapper[4714]: I0129 16:30:32.001529 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56bph\" (UniqueName: \"kubernetes.io/projected/2da2589f-6f6a-4921-bc7e-70e1b62979f1-kube-api-access-56bph\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:32 crc kubenswrapper[4714]: I0129 16:30:32.196261 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27df3c21-5ecb-4af0-9e48-a40f826dc75d" path="/var/lib/kubelet/pods/27df3c21-5ecb-4af0-9e48-a40f826dc75d/volumes"
Jan 29 16:30:32 crc kubenswrapper[4714]: I0129 16:30:32.520012 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder6e48-account-delete-8kw8k" event={"ID":"2da2589f-6f6a-4921-bc7e-70e1b62979f1","Type":"ContainerDied","Data":"47333c12ea7c2648d34e992cfdd83e276361abff785531f101ec9270b9603aa3"}
Jan 29 16:30:32 crc kubenswrapper[4714]: I0129 16:30:32.520073 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47333c12ea7c2648d34e992cfdd83e276361abff785531f101ec9270b9603aa3"
Jan 29 16:30:32 crc kubenswrapper[4714]: I0129 16:30:32.520155 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder6e48-account-delete-8kw8k"
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.527679 4714 generic.go:334] "Generic (PLEG): container finished" podID="7c2fde86-e7c8-4605-9750-8464ca4b7d58" containerID="2caedfb03564966ae2d6b87961e4e7d74fd983fe0c16c41eb96e2d6ca6ebd267" exitCode=0
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.527786 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"7c2fde86-e7c8-4605-9750-8464ca4b7d58","Type":"ContainerDied","Data":"2caedfb03564966ae2d6b87961e4e7d74fd983fe0c16c41eb96e2d6ca6ebd267"}
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.529719 4714 generic.go:334] "Generic (PLEG): container finished" podID="130b46c6-e7e5-4202-bea4-1214ec4766e8" containerID="bccd615573c7599b167665a1108953cb3d1759c6fb3d978bae9a4b7e75fbc11a" exitCode=0
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.529758 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"130b46c6-e7e5-4202-bea4-1214ec4766e8","Type":"ContainerDied","Data":"bccd615573c7599b167665a1108953cb3d1759c6fb3d978bae9a4b7e75fbc11a"}
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.651798 4714 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/cinder-api-0" podUID="93eb21c7-d0f9-4648-a671-03d3ccd28429" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.109:8776/healthcheck\": read tcp 10.217.0.2:56774->10.217.0.109:8776: read: connection reset by peer"
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.659571 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0"
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.727647 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c2fde86-e7c8-4605-9750-8464ca4b7d58-config-data\") pod \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") "
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.727681 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c2fde86-e7c8-4605-9750-8464ca4b7d58-combined-ca-bundle\") pod \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") "
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.727716 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-lib-modules\") pod \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") "
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.727740 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-dev\") pod \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") "
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.727761 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-sys\") pod \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") "
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.727801 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-etc-iscsi\") pod \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") "
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.727882 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "7c2fde86-e7c8-4605-9750-8464ca4b7d58" (UID: "7c2fde86-e7c8-4605-9750-8464ca4b7d58"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.731547 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-var-locks-brick\") pod \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") "
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.731880 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-run\") pod \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") "
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.731913 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl9k4\" (UniqueName: \"kubernetes.io/projected/7c2fde86-e7c8-4605-9750-8464ca4b7d58-kube-api-access-bl9k4\") pod \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") "
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.731634 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-dev" (OuterVolumeSpecName: "dev") pod "7c2fde86-e7c8-4605-9750-8464ca4b7d58" (UID: "7c2fde86-e7c8-4605-9750-8464ca4b7d58"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.731669 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-sys" (OuterVolumeSpecName: "sys") pod "7c2fde86-e7c8-4605-9750-8464ca4b7d58" (UID: "7c2fde86-e7c8-4605-9750-8464ca4b7d58"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.731690 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "7c2fde86-e7c8-4605-9750-8464ca4b7d58" (UID: "7c2fde86-e7c8-4605-9750-8464ca4b7d58"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.731990 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-run" (OuterVolumeSpecName: "run") pod "7c2fde86-e7c8-4605-9750-8464ca4b7d58" (UID: "7c2fde86-e7c8-4605-9750-8464ca4b7d58"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.731741 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7c2fde86-e7c8-4605-9750-8464ca4b7d58" (UID: "7c2fde86-e7c8-4605-9750-8464ca4b7d58"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.731951 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c2fde86-e7c8-4605-9750-8464ca4b7d58-scripts\") pod \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") "
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.732089 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-etc-machine-id\") pod \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") "
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.732132 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c2fde86-e7c8-4605-9750-8464ca4b7d58-config-data-custom\") pod \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") "
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.732157 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-var-locks-cinder\") pod \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") "
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.732202 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-etc-nvme\") pod \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") "
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.732230 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-var-lib-cinder\") pod \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\" (UID: \"7c2fde86-e7c8-4605-9750-8464ca4b7d58\") "
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.732690 4714 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-lib-modules\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.732707 4714 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-dev\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.732717 4714 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-sys\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.732726 4714 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-etc-iscsi\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.732734 4714 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-var-locks-brick\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.732743 4714 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-run\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.732771 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "7c2fde86-e7c8-4605-9750-8464ca4b7d58" (UID: "7c2fde86-e7c8-4605-9750-8464ca4b7d58"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.732791 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7c2fde86-e7c8-4605-9750-8464ca4b7d58" (UID: "7c2fde86-e7c8-4605-9750-8464ca4b7d58"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.733173 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "7c2fde86-e7c8-4605-9750-8464ca4b7d58" (UID: "7c2fde86-e7c8-4605-9750-8464ca4b7d58"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.733204 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "7c2fde86-e7c8-4605-9750-8464ca4b7d58" (UID: "7c2fde86-e7c8-4605-9750-8464ca4b7d58"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.735508 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c2fde86-e7c8-4605-9750-8464ca4b7d58-kube-api-access-bl9k4" (OuterVolumeSpecName: "kube-api-access-bl9k4") pod "7c2fde86-e7c8-4605-9750-8464ca4b7d58" (UID: "7c2fde86-e7c8-4605-9750-8464ca4b7d58"). InnerVolumeSpecName "kube-api-access-bl9k4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.735695 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c2fde86-e7c8-4605-9750-8464ca4b7d58-scripts" (OuterVolumeSpecName: "scripts") pod "7c2fde86-e7c8-4605-9750-8464ca4b7d58" (UID: "7c2fde86-e7c8-4605-9750-8464ca4b7d58"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.736087 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c2fde86-e7c8-4605-9750-8464ca4b7d58-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7c2fde86-e7c8-4605-9750-8464ca4b7d58" (UID: "7c2fde86-e7c8-4605-9750-8464ca4b7d58"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.753820 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0"
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.774508 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c2fde86-e7c8-4605-9750-8464ca4b7d58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c2fde86-e7c8-4605-9750-8464ca4b7d58" (UID: "7c2fde86-e7c8-4605-9750-8464ca4b7d58"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.799140 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c2fde86-e7c8-4605-9750-8464ca4b7d58-config-data" (OuterVolumeSpecName: "config-data") pod "7c2fde86-e7c8-4605-9750-8464ca4b7d58" (UID: "7c2fde86-e7c8-4605-9750-8464ca4b7d58"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.834072 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/130b46c6-e7e5-4202-bea4-1214ec4766e8-config-data-custom\") pod \"130b46c6-e7e5-4202-bea4-1214ec4766e8\" (UID: \"130b46c6-e7e5-4202-bea4-1214ec4766e8\") "
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.834175 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/130b46c6-e7e5-4202-bea4-1214ec4766e8-scripts\") pod \"130b46c6-e7e5-4202-bea4-1214ec4766e8\" (UID: \"130b46c6-e7e5-4202-bea4-1214ec4766e8\") "
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.834266 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/130b46c6-e7e5-4202-bea4-1214ec4766e8-combined-ca-bundle\") pod \"130b46c6-e7e5-4202-bea4-1214ec4766e8\" (UID: \"130b46c6-e7e5-4202-bea4-1214ec4766e8\") "
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.834306 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/130b46c6-e7e5-4202-bea4-1214ec4766e8-config-data\") pod \"130b46c6-e7e5-4202-bea4-1214ec4766e8\" (UID: \"130b46c6-e7e5-4202-bea4-1214ec4766e8\") "
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.834334 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85zz9\" (UniqueName: \"kubernetes.io/projected/130b46c6-e7e5-4202-bea4-1214ec4766e8-kube-api-access-85zz9\") pod \"130b46c6-e7e5-4202-bea4-1214ec4766e8\" (UID: \"130b46c6-e7e5-4202-bea4-1214ec4766e8\") "
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.834358 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/130b46c6-e7e5-4202-bea4-1214ec4766e8-etc-machine-id\") pod \"130b46c6-e7e5-4202-bea4-1214ec4766e8\" (UID: \"130b46c6-e7e5-4202-bea4-1214ec4766e8\") "
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.834777 4714 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c2fde86-e7c8-4605-9750-8464ca4b7d58-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.834796 4714 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-var-locks-cinder\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.834806 4714 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-etc-nvme\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.834815 4714 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-var-lib-cinder\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.834824 4714 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c2fde86-e7c8-4605-9750-8464ca4b7d58-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.834833 4714 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c2fde86-e7c8-4605-9750-8464ca4b7d58-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.834844 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl9k4\" (UniqueName: \"kubernetes.io/projected/7c2fde86-e7c8-4605-9750-8464ca4b7d58-kube-api-access-bl9k4\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.834854 4714 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c2fde86-e7c8-4605-9750-8464ca4b7d58-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.834863 4714 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c2fde86-e7c8-4605-9750-8464ca4b7d58-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.835269 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/130b46c6-e7e5-4202-bea4-1214ec4766e8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "130b46c6-e7e5-4202-bea4-1214ec4766e8" (UID: "130b46c6-e7e5-4202-bea4-1214ec4766e8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.838274 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/130b46c6-e7e5-4202-bea4-1214ec4766e8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "130b46c6-e7e5-4202-bea4-1214ec4766e8" (UID: "130b46c6-e7e5-4202-bea4-1214ec4766e8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.838373 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/130b46c6-e7e5-4202-bea4-1214ec4766e8-scripts" (OuterVolumeSpecName: "scripts") pod "130b46c6-e7e5-4202-bea4-1214ec4766e8" (UID: "130b46c6-e7e5-4202-bea4-1214ec4766e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.840535 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/130b46c6-e7e5-4202-bea4-1214ec4766e8-kube-api-access-85zz9" (OuterVolumeSpecName: "kube-api-access-85zz9") pod "130b46c6-e7e5-4202-bea4-1214ec4766e8" (UID: "130b46c6-e7e5-4202-bea4-1214ec4766e8"). InnerVolumeSpecName "kube-api-access-85zz9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.880625 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/130b46c6-e7e5-4202-bea4-1214ec4766e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "130b46c6-e7e5-4202-bea4-1214ec4766e8" (UID: "130b46c6-e7e5-4202-bea4-1214ec4766e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.901089 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/130b46c6-e7e5-4202-bea4-1214ec4766e8-config-data" (OuterVolumeSpecName: "config-data") pod "130b46c6-e7e5-4202-bea4-1214ec4766e8" (UID: "130b46c6-e7e5-4202-bea4-1214ec4766e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.935835 4714 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/130b46c6-e7e5-4202-bea4-1214ec4766e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.935868 4714 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/130b46c6-e7e5-4202-bea4-1214ec4766e8-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.935877 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85zz9\" (UniqueName: \"kubernetes.io/projected/130b46c6-e7e5-4202-bea4-1214ec4766e8-kube-api-access-85zz9\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.935889 4714 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/130b46c6-e7e5-4202-bea4-1214ec4766e8-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.935898 4714 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/130b46c6-e7e5-4202-bea4-1214ec4766e8-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.935906 4714 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/130b46c6-e7e5-4202-bea4-1214ec4766e8-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:33 crc kubenswrapper[4714]: I0129 16:30:33.994314 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0"
Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.037207 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-combined-ca-bundle\") pod \"93eb21c7-d0f9-4648-a671-03d3ccd28429\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") "
Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.037277 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-scripts\") pod \"93eb21c7-d0f9-4648-a671-03d3ccd28429\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") "
Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.037332 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93eb21c7-d0f9-4648-a671-03d3ccd28429-logs\") pod \"93eb21c7-d0f9-4648-a671-03d3ccd28429\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") "
Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.037379 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/93eb21c7-d0f9-4648-a671-03d3ccd28429-etc-machine-id\") pod \"93eb21c7-d0f9-4648-a671-03d3ccd28429\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") "
Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.037402 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-internal-tls-certs\") pod \"93eb21c7-d0f9-4648-a671-03d3ccd28429\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") "
Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.037482 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-str9z\" (UniqueName: \"kubernetes.io/projected/93eb21c7-d0f9-4648-a671-03d3ccd28429-kube-api-access-str9z\") pod \"93eb21c7-d0f9-4648-a671-03d3ccd28429\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") "
Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.037473 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93eb21c7-d0f9-4648-a671-03d3ccd28429-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "93eb21c7-d0f9-4648-a671-03d3ccd28429" (UID: "93eb21c7-d0f9-4648-a671-03d3ccd28429"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.038277 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93eb21c7-d0f9-4648-a671-03d3ccd28429-logs" (OuterVolumeSpecName: "logs") pod "93eb21c7-d0f9-4648-a671-03d3ccd28429" (UID: "93eb21c7-d0f9-4648-a671-03d3ccd28429"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.038296 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-public-tls-certs\") pod \"93eb21c7-d0f9-4648-a671-03d3ccd28429\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") "
Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.038437 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-config-data\") pod \"93eb21c7-d0f9-4648-a671-03d3ccd28429\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") "
Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.038518 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-config-data-custom\") pod \"93eb21c7-d0f9-4648-a671-03d3ccd28429\" (UID: \"93eb21c7-d0f9-4648-a671-03d3ccd28429\") "
Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.039119 4714 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93eb21c7-d0f9-4648-a671-03d3ccd28429-logs\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.039147 4714 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/93eb21c7-d0f9-4648-a671-03d3ccd28429-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.040617 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93eb21c7-d0f9-4648-a671-03d3ccd28429-kube-api-access-str9z" (OuterVolumeSpecName: "kube-api-access-str9z") pod "93eb21c7-d0f9-4648-a671-03d3ccd28429" (UID: "93eb21c7-d0f9-4648-a671-03d3ccd28429"). InnerVolumeSpecName "kube-api-access-str9z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.041405 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-scripts" (OuterVolumeSpecName: "scripts") pod "93eb21c7-d0f9-4648-a671-03d3ccd28429" (UID: "93eb21c7-d0f9-4648-a671-03d3ccd28429"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.041510 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "93eb21c7-d0f9-4648-a671-03d3ccd28429" (UID: "93eb21c7-d0f9-4648-a671-03d3ccd28429"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.058174 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93eb21c7-d0f9-4648-a671-03d3ccd28429" (UID: "93eb21c7-d0f9-4648-a671-03d3ccd28429"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.073106 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-config-data" (OuterVolumeSpecName: "config-data") pod "93eb21c7-d0f9-4648-a671-03d3ccd28429" (UID: "93eb21c7-d0f9-4648-a671-03d3ccd28429"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.075252 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "93eb21c7-d0f9-4648-a671-03d3ccd28429" (UID: "93eb21c7-d0f9-4648-a671-03d3ccd28429"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.079393 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "93eb21c7-d0f9-4648-a671-03d3ccd28429" (UID: "93eb21c7-d0f9-4648-a671-03d3ccd28429"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.140674 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-str9z\" (UniqueName: \"kubernetes.io/projected/93eb21c7-d0f9-4648-a671-03d3ccd28429-kube-api-access-str9z\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.140712 4714 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.140722 4714 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.140732 4714 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.140740 4714 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.140748 4714 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.140757 4714 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93eb21c7-d0f9-4648-a671-03d3ccd28429-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.244145 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-hqbqv"] Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.251890 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["cinder-kuttl-tests/cinder-db-create-hqbqv"] Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.270392 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder6e48-account-delete-8kw8k"] Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.283314 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-6e48-account-create-update-qff9j"] Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.295713 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-6e48-account-create-update-qff9j"] Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.307985 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder6e48-account-delete-8kw8k"] Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.541869 4714 generic.go:334] "Generic (PLEG): container finished" podID="93eb21c7-d0f9-4648-a671-03d3ccd28429" containerID="0c9e8f216ea8d08fef702164fee1c489d59f45d35804acfb1d8d5c8a08af65d1" exitCode=0 Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.541919 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"93eb21c7-d0f9-4648-a671-03d3ccd28429","Type":"ContainerDied","Data":"0c9e8f216ea8d08fef702164fee1c489d59f45d35804acfb1d8d5c8a08af65d1"} Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.541973 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.542011 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"93eb21c7-d0f9-4648-a671-03d3ccd28429","Type":"ContainerDied","Data":"9cf9a3f95739dab5cf604ede39f2741a3a785d75007c516172ae75ff6b3665b1"} Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.542050 4714 scope.go:117] "RemoveContainer" containerID="0c9e8f216ea8d08fef702164fee1c489d59f45d35804acfb1d8d5c8a08af65d1" Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.545549 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"7c2fde86-e7c8-4605-9750-8464ca4b7d58","Type":"ContainerDied","Data":"e25204d76e2148fee59f9d4ff12c3c8594f24cfcc42364e257052da501eeec8a"} Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.545563 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.549386 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"130b46c6-e7e5-4202-bea4-1214ec4766e8","Type":"ContainerDied","Data":"4117860f9b4009b5187f645af44ec4d5a39f7166f668190572db33d3a8f8e24c"} Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.549505 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.566440 4714 scope.go:117] "RemoveContainer" containerID="b54fd3f2968e79a20a0b8cb352bdc32d69a2daf77ebdec8342b8fb48a253ab80" Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.577590 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.584468 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.587984 4714 scope.go:117] "RemoveContainer" containerID="0c9e8f216ea8d08fef702164fee1c489d59f45d35804acfb1d8d5c8a08af65d1" Jan 29 16:30:34 crc kubenswrapper[4714]: E0129 16:30:34.588492 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c9e8f216ea8d08fef702164fee1c489d59f45d35804acfb1d8d5c8a08af65d1\": container with ID starting with 0c9e8f216ea8d08fef702164fee1c489d59f45d35804acfb1d8d5c8a08af65d1 not found: ID does not exist" containerID="0c9e8f216ea8d08fef702164fee1c489d59f45d35804acfb1d8d5c8a08af65d1" Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.588541 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c9e8f216ea8d08fef702164fee1c489d59f45d35804acfb1d8d5c8a08af65d1"} err="failed to get container status \"0c9e8f216ea8d08fef702164fee1c489d59f45d35804acfb1d8d5c8a08af65d1\": rpc error: code = NotFound desc = could not find container \"0c9e8f216ea8d08fef702164fee1c489d59f45d35804acfb1d8d5c8a08af65d1\": container with ID starting with 0c9e8f216ea8d08fef702164fee1c489d59f45d35804acfb1d8d5c8a08af65d1 not found: ID does not exist" Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.588580 4714 scope.go:117] "RemoveContainer" containerID="b54fd3f2968e79a20a0b8cb352bdc32d69a2daf77ebdec8342b8fb48a253ab80" Jan 29 16:30:34 crc kubenswrapper[4714]: E0129 16:30:34.588873 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b54fd3f2968e79a20a0b8cb352bdc32d69a2daf77ebdec8342b8fb48a253ab80\": container with ID starting with b54fd3f2968e79a20a0b8cb352bdc32d69a2daf77ebdec8342b8fb48a253ab80 not found: ID does not exist" containerID="b54fd3f2968e79a20a0b8cb352bdc32d69a2daf77ebdec8342b8fb48a253ab80" Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.588902 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b54fd3f2968e79a20a0b8cb352bdc32d69a2daf77ebdec8342b8fb48a253ab80"} err="failed to get container status \"b54fd3f2968e79a20a0b8cb352bdc32d69a2daf77ebdec8342b8fb48a253ab80\": rpc error: code = NotFound desc = could not find container \"b54fd3f2968e79a20a0b8cb352bdc32d69a2daf77ebdec8342b8fb48a253ab80\": container with ID starting with b54fd3f2968e79a20a0b8cb352bdc32d69a2daf77ebdec8342b8fb48a253ab80 not found: ID does not exist" Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.588920 4714 scope.go:117] "RemoveContainer" containerID="c236db1e5dbb8d372959347da33330e9e06dab48278781aa11dc0c6a6f372af8" Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.597053 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.601515 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 29 
16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.606351 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.608898 4714 scope.go:117] "RemoveContainer" containerID="2caedfb03564966ae2d6b87961e4e7d74fd983fe0c16c41eb96e2d6ca6ebd267" Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.610770 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.624001 4714 scope.go:117] "RemoveContainer" containerID="bbf7723cee6103ad631b2d89beaf6a72a626280c58f32453fc92b4acddbf7202" Jan 29 16:30:34 crc kubenswrapper[4714]: I0129 16:30:34.637769 4714 scope.go:117] "RemoveContainer" containerID="bccd615573c7599b167665a1108953cb3d1759c6fb3d978bae9a4b7e75fbc11a" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.673844 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone-db-sync-6xxc6"] Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.683898 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/keystone-db-sync-6xxc6"] Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.690190 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone-bootstrap-kj27d"] Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.697713 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/keystone-bootstrap-kj27d"] Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.704029 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone-db9b49999-6gd95"] Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.704276 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" podUID="fc06a535-6f60-438e-b52d-5dc90fae8c67" containerName="keystone-api" containerID="cri-o://4adeb441736bb7d27c549844b9147979d27f5dbed7e42a9b990619d275a00000" gracePeriod=30 Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.739322 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/keystone52f2-account-delete-qxsl4"] Jan 29 16:30:35 crc kubenswrapper[4714]: E0129 16:30:35.739557 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da2589f-6f6a-4921-bc7e-70e1b62979f1" containerName="mariadb-account-delete" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.739571 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da2589f-6f6a-4921-bc7e-70e1b62979f1" containerName="mariadb-account-delete" Jan 29 16:30:35 crc kubenswrapper[4714]: E0129 16:30:35.739583 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27df3c21-5ecb-4af0-9e48-a40f826dc75d" containerName="probe" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.739589 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="27df3c21-5ecb-4af0-9e48-a40f826dc75d" containerName="probe" Jan 29 16:30:35 crc kubenswrapper[4714]: E0129 16:30:35.739597 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93eb21c7-d0f9-4648-a671-03d3ccd28429" containerName="cinder-api" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.739602 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="93eb21c7-d0f9-4648-a671-03d3ccd28429" containerName="cinder-api" Jan 29 16:30:35 crc kubenswrapper[4714]: E0129 16:30:35.739612 4714 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="93eb21c7-d0f9-4648-a671-03d3ccd28429" containerName="cinder-api-log" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.739618 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="93eb21c7-d0f9-4648-a671-03d3ccd28429" containerName="cinder-api-log" Jan 29 16:30:35 crc kubenswrapper[4714]: E0129 16:30:35.739629 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27df3c21-5ecb-4af0-9e48-a40f826dc75d" containerName="cinder-volume" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.739635 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="27df3c21-5ecb-4af0-9e48-a40f826dc75d" containerName="cinder-volume" Jan 29 16:30:35 crc kubenswrapper[4714]: E0129 16:30:35.739647 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27df3c21-5ecb-4af0-9e48-a40f826dc75d" containerName="probe" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.739653 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="27df3c21-5ecb-4af0-9e48-a40f826dc75d" containerName="probe" Jan 29 16:30:35 crc kubenswrapper[4714]: E0129 16:30:35.739661 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c2fde86-e7c8-4605-9750-8464ca4b7d58" containerName="cinder-backup" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.739666 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c2fde86-e7c8-4605-9750-8464ca4b7d58" containerName="cinder-backup" Jan 29 16:30:35 crc kubenswrapper[4714]: E0129 16:30:35.739672 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="130b46c6-e7e5-4202-bea4-1214ec4766e8" containerName="probe" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.739678 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="130b46c6-e7e5-4202-bea4-1214ec4766e8" containerName="probe" Jan 29 16:30:35 crc kubenswrapper[4714]: E0129 16:30:35.739686 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="130b46c6-e7e5-4202-bea4-1214ec4766e8" containerName="cinder-scheduler" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.739693 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="130b46c6-e7e5-4202-bea4-1214ec4766e8" containerName="cinder-scheduler" Jan 29 16:30:35 crc kubenswrapper[4714]: E0129 16:30:35.739701 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c2fde86-e7c8-4605-9750-8464ca4b7d58" containerName="probe" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.739707 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c2fde86-e7c8-4605-9750-8464ca4b7d58" containerName="probe" Jan 29 16:30:35 crc kubenswrapper[4714]: E0129 16:30:35.739714 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27df3c21-5ecb-4af0-9e48-a40f826dc75d" containerName="cinder-volume" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.739722 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="27df3c21-5ecb-4af0-9e48-a40f826dc75d" containerName="cinder-volume" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.739817 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="27df3c21-5ecb-4af0-9e48-a40f826dc75d" containerName="probe" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.739826 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="93eb21c7-d0f9-4648-a671-03d3ccd28429" containerName="cinder-api-log" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.739835 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="93eb21c7-d0f9-4648-a671-03d3ccd28429" 
containerName="cinder-api" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.739841 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c2fde86-e7c8-4605-9750-8464ca4b7d58" containerName="cinder-backup" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.739852 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="27df3c21-5ecb-4af0-9e48-a40f826dc75d" containerName="cinder-volume" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.739859 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c2fde86-e7c8-4605-9750-8464ca4b7d58" containerName="probe" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.739866 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="130b46c6-e7e5-4202-bea4-1214ec4766e8" containerName="probe" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.739875 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="2da2589f-6f6a-4921-bc7e-70e1b62979f1" containerName="mariadb-account-delete" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.739882 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="27df3c21-5ecb-4af0-9e48-a40f826dc75d" containerName="cinder-volume" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.739890 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="130b46c6-e7e5-4202-bea4-1214ec4766e8" containerName="cinder-scheduler" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.740500 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone52f2-account-delete-qxsl4" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.751115 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone52f2-account-delete-qxsl4"] Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.869550 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/112a1cde-5990-4140-97dd-c2bbc4f73197-operator-scripts\") pod \"keystone52f2-account-delete-qxsl4\" (UID: \"112a1cde-5990-4140-97dd-c2bbc4f73197\") " pod="cinder-kuttl-tests/keystone52f2-account-delete-qxsl4" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.869615 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrfh5\" (UniqueName: \"kubernetes.io/projected/112a1cde-5990-4140-97dd-c2bbc4f73197-kube-api-access-rrfh5\") pod \"keystone52f2-account-delete-qxsl4\" (UID: \"112a1cde-5990-4140-97dd-c2bbc4f73197\") " pod="cinder-kuttl-tests/keystone52f2-account-delete-qxsl4" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.971410 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/112a1cde-5990-4140-97dd-c2bbc4f73197-operator-scripts\") pod \"keystone52f2-account-delete-qxsl4\" (UID: \"112a1cde-5990-4140-97dd-c2bbc4f73197\") " pod="cinder-kuttl-tests/keystone52f2-account-delete-qxsl4" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.971468 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrfh5\" (UniqueName: \"kubernetes.io/projected/112a1cde-5990-4140-97dd-c2bbc4f73197-kube-api-access-rrfh5\") pod \"keystone52f2-account-delete-qxsl4\" (UID: \"112a1cde-5990-4140-97dd-c2bbc4f73197\") " pod="cinder-kuttl-tests/keystone52f2-account-delete-qxsl4" Jan 29 16:30:35 crc kubenswrapper[4714]: 
I0129 16:30:35.972254 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/112a1cde-5990-4140-97dd-c2bbc4f73197-operator-scripts\") pod \"keystone52f2-account-delete-qxsl4\" (UID: \"112a1cde-5990-4140-97dd-c2bbc4f73197\") " pod="cinder-kuttl-tests/keystone52f2-account-delete-qxsl4" Jan 29 16:30:35 crc kubenswrapper[4714]: I0129 16:30:35.995315 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrfh5\" (UniqueName: \"kubernetes.io/projected/112a1cde-5990-4140-97dd-c2bbc4f73197-kube-api-access-rrfh5\") pod \"keystone52f2-account-delete-qxsl4\" (UID: \"112a1cde-5990-4140-97dd-c2bbc4f73197\") " pod="cinder-kuttl-tests/keystone52f2-account-delete-qxsl4" Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.065893 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone52f2-account-delete-qxsl4" Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.196576 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="130b46c6-e7e5-4202-bea4-1214ec4766e8" path="/var/lib/kubelet/pods/130b46c6-e7e5-4202-bea4-1214ec4766e8/volumes" Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.198191 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2da2589f-6f6a-4921-bc7e-70e1b62979f1" path="/var/lib/kubelet/pods/2da2589f-6f6a-4921-bc7e-70e1b62979f1/volumes" Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.199329 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e9c54b1-972a-4807-90af-f94a884002bd" path="/var/lib/kubelet/pods/5e9c54b1-972a-4807-90af-f94a884002bd/volumes" Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.201149 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e33ba3d-9561-441b-b835-fbdb6ce97d23" path="/var/lib/kubelet/pods/6e33ba3d-9561-441b-b835-fbdb6ce97d23/volumes" Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.202312 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c2fde86-e7c8-4605-9750-8464ca4b7d58" path="/var/lib/kubelet/pods/7c2fde86-e7c8-4605-9750-8464ca4b7d58/volumes" Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.203510 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93eb21c7-d0f9-4648-a671-03d3ccd28429" path="/var/lib/kubelet/pods/93eb21c7-d0f9-4648-a671-03d3ccd28429/volumes" Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.205104 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3306850-8843-48e1-b203-7f52de72682f" path="/var/lib/kubelet/pods/c3306850-8843-48e1-b203-7f52de72682f/volumes" Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.205778 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3e03982-d953-488f-a01a-5024f64ad7da" path="/var/lib/kubelet/pods/f3e03982-d953-488f-a01a-5024f64ad7da/volumes" Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.501871 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone52f2-account-delete-qxsl4"] Jan 29 16:30:36 crc kubenswrapper[4714]: W0129 16:30:36.511038 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod112a1cde_5990_4140_97dd_c2bbc4f73197.slice/crio-886af71ec9bb3cd0c3ff50ec50762de0bc190d913b4893fee8dbe7877dbfc19b WatchSource:0}: Error finding container 
886af71ec9bb3cd0c3ff50ec50762de0bc190d913b4893fee8dbe7877dbfc19b: Status 404 returned error can't find the container with id 886af71ec9bb3cd0c3ff50ec50762de0bc190d913b4893fee8dbe7877dbfc19b Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.581033 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-2fh2r"] Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.618503 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-2fh2r"] Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.620361 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone52f2-account-delete-qxsl4" event={"ID":"112a1cde-5990-4140-97dd-c2bbc4f73197","Type":"ContainerStarted","Data":"886af71ec9bb3cd0c3ff50ec50762de0bc190d913b4893fee8dbe7877dbfc19b"} Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.632103 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/root-account-create-update-jg6sh"] Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.632527 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="27df3c21-5ecb-4af0-9e48-a40f826dc75d" containerName="probe" Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.633049 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-jg6sh" Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.636061 4714 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"openstack-mariadb-root-db-secret" Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.638979 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/openstack-galera-2"] Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.660561 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/openstack-galera-0"] Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.665919 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/openstack-galera-1"] Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.671082 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-jg6sh"] Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.684783 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc4cd\" (UniqueName: \"kubernetes.io/projected/4a2536dd-4258-4b5b-863c-a76431c992ee-kube-api-access-rc4cd\") pod \"root-account-create-update-jg6sh\" (UID: \"4a2536dd-4258-4b5b-863c-a76431c992ee\") " pod="cinder-kuttl-tests/root-account-create-update-jg6sh" Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.684832 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a2536dd-4258-4b5b-863c-a76431c992ee-operator-scripts\") pod \"root-account-create-update-jg6sh\" (UID: \"4a2536dd-4258-4b5b-863c-a76431c992ee\") " pod="cinder-kuttl-tests/root-account-create-update-jg6sh" Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.687463 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-jg6sh"] Jan 29 16:30:36 crc kubenswrapper[4714]: E0129 16:30:36.687834 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-rc4cd operator-scripts], unattached volumes=[], failed to 
process volumes=[]: context canceled" pod="cinder-kuttl-tests/root-account-create-update-jg6sh" podUID="4a2536dd-4258-4b5b-863c-a76431c992ee" Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.786616 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc4cd\" (UniqueName: \"kubernetes.io/projected/4a2536dd-4258-4b5b-863c-a76431c992ee-kube-api-access-rc4cd\") pod \"root-account-create-update-jg6sh\" (UID: \"4a2536dd-4258-4b5b-863c-a76431c992ee\") " pod="cinder-kuttl-tests/root-account-create-update-jg6sh" Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.786670 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a2536dd-4258-4b5b-863c-a76431c992ee-operator-scripts\") pod \"root-account-create-update-jg6sh\" (UID: \"4a2536dd-4258-4b5b-863c-a76431c992ee\") " pod="cinder-kuttl-tests/root-account-create-update-jg6sh" Jan 29 16:30:36 crc kubenswrapper[4714]: E0129 16:30:36.786836 4714 configmap.go:193] Couldn't get configMap cinder-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 29 16:30:36 crc kubenswrapper[4714]: E0129 16:30:36.786914 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4a2536dd-4258-4b5b-863c-a76431c992ee-operator-scripts podName:4a2536dd-4258-4b5b-863c-a76431c992ee nodeName:}" failed. No retries permitted until 2026-01-29 16:30:37.286895536 +0000 UTC m=+1243.807396656 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4a2536dd-4258-4b5b-863c-a76431c992ee-operator-scripts") pod "root-account-create-update-jg6sh" (UID: "4a2536dd-4258-4b5b-863c-a76431c992ee") : configmap "openstack-scripts" not found Jan 29 16:30:36 crc kubenswrapper[4714]: I0129 16:30:36.787477 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/openstack-galera-2" podUID="e367e739-45d9-4c71-82fa-ecda02da3277" containerName="galera" containerID="cri-o://6653bc0073fbdde40468fb06b953c1302846763f2888418b4fc21509ef3915d2" gracePeriod=30 Jan 29 16:30:36 crc kubenswrapper[4714]: E0129 16:30:36.790611 4714 projected.go:194] Error preparing data for projected volume kube-api-access-rc4cd for pod cinder-kuttl-tests/root-account-create-update-jg6sh: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 16:30:36 crc kubenswrapper[4714]: E0129 16:30:36.790655 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4a2536dd-4258-4b5b-863c-a76431c992ee-kube-api-access-rc4cd podName:4a2536dd-4258-4b5b-863c-a76431c992ee nodeName:}" failed. No retries permitted until 2026-01-29 16:30:37.290644034 +0000 UTC m=+1243.811145154 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rc4cd" (UniqueName: "kubernetes.io/projected/4a2536dd-4258-4b5b-863c-a76431c992ee-kube-api-access-rc4cd") pod "root-account-create-update-jg6sh" (UID: "4a2536dd-4258-4b5b-863c-a76431c992ee") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 16:30:37 crc kubenswrapper[4714]: E0129 16:30:37.185750 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.190019 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/memcached-0"] Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.190335 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/memcached-0" podUID="d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea" containerName="memcached" containerID="cri-o://5244ecc09d3c74f3a195f77af7759f9d242093032f09d5208383b4619dda295d" gracePeriod=30 Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.294200 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc4cd\" (UniqueName: \"kubernetes.io/projected/4a2536dd-4258-4b5b-863c-a76431c992ee-kube-api-access-rc4cd\") pod \"root-account-create-update-jg6sh\" (UID: \"4a2536dd-4258-4b5b-863c-a76431c992ee\") " pod="cinder-kuttl-tests/root-account-create-update-jg6sh" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.294277 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a2536dd-4258-4b5b-863c-a76431c992ee-operator-scripts\") pod \"root-account-create-update-jg6sh\" (UID: \"4a2536dd-4258-4b5b-863c-a76431c992ee\") " pod="cinder-kuttl-tests/root-account-create-update-jg6sh" Jan 29 16:30:37 crc kubenswrapper[4714]: E0129 16:30:37.295309 4714 configmap.go:193] Couldn't get configMap cinder-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 29 16:30:37 crc kubenswrapper[4714]: E0129 16:30:37.295365 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4a2536dd-4258-4b5b-863c-a76431c992ee-operator-scripts podName:4a2536dd-4258-4b5b-863c-a76431c992ee nodeName:}" failed. No retries permitted until 2026-01-29 16:30:38.295348663 +0000 UTC m=+1244.815849783 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4a2536dd-4258-4b5b-863c-a76431c992ee-operator-scripts") pod "root-account-create-update-jg6sh" (UID: "4a2536dd-4258-4b5b-863c-a76431c992ee") : configmap "openstack-scripts" not found Jan 29 16:30:37 crc kubenswrapper[4714]: E0129 16:30:37.299066 4714 projected.go:194] Error preparing data for projected volume kube-api-access-rc4cd for pod cinder-kuttl-tests/root-account-create-update-jg6sh: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 16:30:37 crc kubenswrapper[4714]: E0129 16:30:37.299170 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4a2536dd-4258-4b5b-863c-a76431c992ee-kube-api-access-rc4cd podName:4a2536dd-4258-4b5b-863c-a76431c992ee nodeName:}" failed. 
No retries permitted until 2026-01-29 16:30:38.299144241 +0000 UTC m=+1244.819645381 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-rc4cd" (UniqueName: "kubernetes.io/projected/4a2536dd-4258-4b5b-863c-a76431c992ee-kube-api-access-rc4cd") pod "root-account-create-update-jg6sh" (UID: "4a2536dd-4258-4b5b-863c-a76431c992ee") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.601518 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.607077 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.641159 4714 generic.go:334] "Generic (PLEG): container finished" podID="e367e739-45d9-4c71-82fa-ecda02da3277" containerID="6653bc0073fbdde40468fb06b953c1302846763f2888418b4fc21509ef3915d2" exitCode=0 Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.641220 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-2" event={"ID":"e367e739-45d9-4c71-82fa-ecda02da3277","Type":"ContainerDied","Data":"6653bc0073fbdde40468fb06b953c1302846763f2888418b4fc21509ef3915d2"} Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.641238 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-2" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.641287 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-2" event={"ID":"e367e739-45d9-4c71-82fa-ecda02da3277","Type":"ContainerDied","Data":"9165416a79a8d14934c00fc8e00a91ffd697d205964c3585f55278b965651da9"} Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.641311 4714 scope.go:117] "RemoveContainer" containerID="6653bc0073fbdde40468fb06b953c1302846763f2888418b4fc21509ef3915d2" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.646296 4714 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="cinder-kuttl-tests/keystone52f2-account-delete-qxsl4" secret="" err="secret \"galera-openstack-dockercfg-qdqvq\" not found" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.646338 4714 scope.go:117] "RemoveContainer" containerID="b8914d707e1c8c5545d8b139524bf6fc2f7931a0ae5843dc9cc4a838b4c891ba" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.645808 4714 generic.go:334] "Generic (PLEG): container finished" podID="112a1cde-5990-4140-97dd-c2bbc4f73197" containerID="b8914d707e1c8c5545d8b139524bf6fc2f7931a0ae5843dc9cc4a838b4c891ba" exitCode=1 Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.646864 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone52f2-account-delete-qxsl4" event={"ID":"112a1cde-5990-4140-97dd-c2bbc4f73197","Type":"ContainerDied","Data":"b8914d707e1c8c5545d8b139524bf6fc2f7931a0ae5843dc9cc4a838b4c891ba"} Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.646960 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-jg6sh" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.656419 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-jg6sh" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.671098 4714 scope.go:117] "RemoveContainer" containerID="ec556421e74871bfa9c85c53ff648a1a0691767027648a2904e427bc8f75360f" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.698036 4714 scope.go:117] "RemoveContainer" containerID="6653bc0073fbdde40468fb06b953c1302846763f2888418b4fc21509ef3915d2" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.698609 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e367e739-45d9-4c71-82fa-ecda02da3277-operator-scripts\") pod \"e367e739-45d9-4c71-82fa-ecda02da3277\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") " Jan 29 16:30:37 crc kubenswrapper[4714]: E0129 16:30:37.698632 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6653bc0073fbdde40468fb06b953c1302846763f2888418b4fc21509ef3915d2\": container with ID starting with 6653bc0073fbdde40468fb06b953c1302846763f2888418b4fc21509ef3915d2 not found: ID does not exist" containerID="6653bc0073fbdde40468fb06b953c1302846763f2888418b4fc21509ef3915d2" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.698697 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6653bc0073fbdde40468fb06b953c1302846763f2888418b4fc21509ef3915d2"} err="failed to get container status \"6653bc0073fbdde40468fb06b953c1302846763f2888418b4fc21509ef3915d2\": rpc error: code = NotFound desc = could not find container \"6653bc0073fbdde40468fb06b953c1302846763f2888418b4fc21509ef3915d2\": container with ID starting with 6653bc0073fbdde40468fb06b953c1302846763f2888418b4fc21509ef3915d2 not found: ID does not exist" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.698648 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e367e739-45d9-4c71-82fa-ecda02da3277-kolla-config\") pod \"e367e739-45d9-4c71-82fa-ecda02da3277\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") " Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.698832 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"e367e739-45d9-4c71-82fa-ecda02da3277\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") " Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.699112 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e367e739-45d9-4c71-82fa-ecda02da3277-config-data-generated\") pod \"e367e739-45d9-4c71-82fa-ecda02da3277\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") " Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.699151 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nmlf\" (UniqueName: \"kubernetes.io/projected/e367e739-45d9-4c71-82fa-ecda02da3277-kube-api-access-8nmlf\") pod \"e367e739-45d9-4c71-82fa-ecda02da3277\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") " Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.699228 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e367e739-45d9-4c71-82fa-ecda02da3277-config-data-default\") pod 
\"e367e739-45d9-4c71-82fa-ecda02da3277\" (UID: \"e367e739-45d9-4c71-82fa-ecda02da3277\") " Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.699416 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e367e739-45d9-4c71-82fa-ecda02da3277-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "e367e739-45d9-4c71-82fa-ecda02da3277" (UID: "e367e739-45d9-4c71-82fa-ecda02da3277"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.699746 4714 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e367e739-45d9-4c71-82fa-ecda02da3277-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.699819 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e367e739-45d9-4c71-82fa-ecda02da3277-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e367e739-45d9-4c71-82fa-ecda02da3277" (UID: "e367e739-45d9-4c71-82fa-ecda02da3277"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.699841 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e367e739-45d9-4c71-82fa-ecda02da3277-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "e367e739-45d9-4c71-82fa-ecda02da3277" (UID: "e367e739-45d9-4c71-82fa-ecda02da3277"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:30:37 crc kubenswrapper[4714]: E0129 16:30:37.699919 4714 configmap.go:193] Couldn't get configMap cinder-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 29 16:30:37 crc kubenswrapper[4714]: E0129 16:30:37.699997 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/112a1cde-5990-4140-97dd-c2bbc4f73197-operator-scripts podName:112a1cde-5990-4140-97dd-c2bbc4f73197 nodeName:}" failed. No retries permitted until 2026-01-29 16:30:38.199973388 +0000 UTC m=+1244.720474718 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/112a1cde-5990-4140-97dd-c2bbc4f73197-operator-scripts") pod "keystone52f2-account-delete-qxsl4" (UID: "112a1cde-5990-4140-97dd-c2bbc4f73197") : configmap "openstack-scripts" not found Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.700583 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e367e739-45d9-4c71-82fa-ecda02da3277-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "e367e739-45d9-4c71-82fa-ecda02da3277" (UID: "e367e739-45d9-4c71-82fa-ecda02da3277"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.700666 4714 scope.go:117] "RemoveContainer" containerID="ec556421e74871bfa9c85c53ff648a1a0691767027648a2904e427bc8f75360f" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.706136 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e367e739-45d9-4c71-82fa-ecda02da3277-kube-api-access-8nmlf" (OuterVolumeSpecName: "kube-api-access-8nmlf") pod "e367e739-45d9-4c71-82fa-ecda02da3277" (UID: "e367e739-45d9-4c71-82fa-ecda02da3277"). 
InnerVolumeSpecName "kube-api-access-8nmlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.713448 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "mysql-db") pod "e367e739-45d9-4c71-82fa-ecda02da3277" (UID: "e367e739-45d9-4c71-82fa-ecda02da3277"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 16:30:37 crc kubenswrapper[4714]: E0129 16:30:37.714633 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec556421e74871bfa9c85c53ff648a1a0691767027648a2904e427bc8f75360f\": container with ID starting with ec556421e74871bfa9c85c53ff648a1a0691767027648a2904e427bc8f75360f not found: ID does not exist" containerID="ec556421e74871bfa9c85c53ff648a1a0691767027648a2904e427bc8f75360f" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.714693 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec556421e74871bfa9c85c53ff648a1a0691767027648a2904e427bc8f75360f"} err="failed to get container status \"ec556421e74871bfa9c85c53ff648a1a0691767027648a2904e427bc8f75360f\": rpc error: code = NotFound desc = could not find container \"ec556421e74871bfa9c85c53ff648a1a0691767027648a2904e427bc8f75360f\": container with ID starting with ec556421e74871bfa9c85c53ff648a1a0691767027648a2904e427bc8f75360f not found: ID does not exist" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.801009 4714 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e367e739-45d9-4c71-82fa-ecda02da3277-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.801036 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nmlf\" (UniqueName: \"kubernetes.io/projected/e367e739-45d9-4c71-82fa-ecda02da3277-kube-api-access-8nmlf\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.801045 4714 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e367e739-45d9-4c71-82fa-ecda02da3277-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.801057 4714 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e367e739-45d9-4c71-82fa-ecda02da3277-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.801090 4714 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.811685 4714 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.902152 4714 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.982001 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["cinder-kuttl-tests/openstack-galera-2"] Jan 29 16:30:37 crc kubenswrapper[4714]: I0129 16:30:37.997069 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/openstack-galera-2"] Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.001500 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.040344 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/rabbitmq-server-0" podUID="55e23ac1-a89b-4689-a17d-bee875f7783e" containerName="rabbitmq" containerID="cri-o://ee125ef7ce148229ad82b22d2a7578dd54b32a1f3c67998b7176056878a4676a" gracePeriod=604800 Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.191185 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c94e6f97-6224-46d2-b406-f5d02a596cb7" path="/var/lib/kubelet/pods/c94e6f97-6224-46d2-b406-f5d02a596cb7/volumes" Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.191719 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e367e739-45d9-4c71-82fa-ecda02da3277" path="/var/lib/kubelet/pods/e367e739-45d9-4c71-82fa-ecda02da3277/volumes" Jan 29 16:30:38 crc kubenswrapper[4714]: E0129 16:30:38.204775 4714 configmap.go:193] Couldn't get configMap cinder-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 29 16:30:38 crc kubenswrapper[4714]: E0129 16:30:38.205040 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/112a1cde-5990-4140-97dd-c2bbc4f73197-operator-scripts podName:112a1cde-5990-4140-97dd-c2bbc4f73197 nodeName:}" failed. No retries permitted until 2026-01-29 16:30:39.205025766 +0000 UTC m=+1245.725526886 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/112a1cde-5990-4140-97dd-c2bbc4f73197-operator-scripts") pod "keystone52f2-account-delete-qxsl4" (UID: "112a1cde-5990-4140-97dd-c2bbc4f73197") : configmap "openstack-scripts" not found Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.307810 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc4cd\" (UniqueName: \"kubernetes.io/projected/4a2536dd-4258-4b5b-863c-a76431c992ee-kube-api-access-rc4cd\") pod \"root-account-create-update-jg6sh\" (UID: \"4a2536dd-4258-4b5b-863c-a76431c992ee\") " pod="cinder-kuttl-tests/root-account-create-update-jg6sh" Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.307852 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a2536dd-4258-4b5b-863c-a76431c992ee-operator-scripts\") pod \"root-account-create-update-jg6sh\" (UID: \"4a2536dd-4258-4b5b-863c-a76431c992ee\") " pod="cinder-kuttl-tests/root-account-create-update-jg6sh" Jan 29 16:30:38 crc kubenswrapper[4714]: E0129 16:30:38.309599 4714 configmap.go:193] Couldn't get configMap cinder-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 29 16:30:38 crc kubenswrapper[4714]: E0129 16:30:38.309658 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4a2536dd-4258-4b5b-863c-a76431c992ee-operator-scripts podName:4a2536dd-4258-4b5b-863c-a76431c992ee nodeName:}" failed. No retries permitted until 2026-01-29 16:30:40.309645159 +0000 UTC m=+1246.830146279 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4a2536dd-4258-4b5b-863c-a76431c992ee-operator-scripts") pod "root-account-create-update-jg6sh" (UID: "4a2536dd-4258-4b5b-863c-a76431c992ee") : configmap "openstack-scripts" not found Jan 29 16:30:38 crc kubenswrapper[4714]: E0129 16:30:38.313206 4714 projected.go:194] Error preparing data for projected volume kube-api-access-rc4cd for pod cinder-kuttl-tests/root-account-create-update-jg6sh: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 16:30:38 crc kubenswrapper[4714]: E0129 16:30:38.313271 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4a2536dd-4258-4b5b-863c-a76431c992ee-kube-api-access-rc4cd podName:4a2536dd-4258-4b5b-863c-a76431c992ee nodeName:}" failed. No retries permitted until 2026-01-29 16:30:40.313260833 +0000 UTC m=+1246.833761953 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-rc4cd" (UniqueName: "kubernetes.io/projected/4a2536dd-4258-4b5b-863c-a76431c992ee-kube-api-access-rc4cd") pod "root-account-create-update-jg6sh" (UID: "4a2536dd-4258-4b5b-863c-a76431c992ee") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.458370 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/memcached-0" Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.509016 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea-config-data\") pod \"d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea\" (UID: \"d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea\") " Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.509092 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea-kolla-config\") pod \"d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea\" (UID: \"d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea\") " Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.509159 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkmzp\" (UniqueName: \"kubernetes.io/projected/d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea-kube-api-access-vkmzp\") pod \"d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea\" (UID: \"d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea\") " Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.509804 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea-config-data" (OuterVolumeSpecName: "config-data") pod "d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea" (UID: "d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.510295 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea" (UID: "d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.514207 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea-kube-api-access-vkmzp" (OuterVolumeSpecName: "kube-api-access-vkmzp") pod "d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea" (UID: "d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea"). InnerVolumeSpecName "kube-api-access-vkmzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.610587 4714 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.610627 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkmzp\" (UniqueName: \"kubernetes.io/projected/d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea-kube-api-access-vkmzp\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.610640 4714 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.658777 4714 generic.go:334] "Generic (PLEG): container finished" podID="d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea" containerID="5244ecc09d3c74f3a195f77af7759f9d242093032f09d5208383b4619dda295d" exitCode=0 Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.658886 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/memcached-0" Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.658879 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/memcached-0" event={"ID":"d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea","Type":"ContainerDied","Data":"5244ecc09d3c74f3a195f77af7759f9d242093032f09d5208383b4619dda295d"} Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.658998 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/memcached-0" event={"ID":"d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea","Type":"ContainerDied","Data":"5fcd1b55c77976e4d94c390473639a6b02e3c4a2129659d96d1e68f62ca74a39"} Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.659092 4714 scope.go:117] "RemoveContainer" containerID="5244ecc09d3c74f3a195f77af7759f9d242093032f09d5208383b4619dda295d" Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.662549 4714 generic.go:334] "Generic (PLEG): container finished" podID="112a1cde-5990-4140-97dd-c2bbc4f73197" containerID="cf816b8f59cc18e67ef8edcaed9119f0334f92970ee8c49b8887e2bc061f93ab" exitCode=1 Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.662642 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-jg6sh" Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.662735 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone52f2-account-delete-qxsl4" event={"ID":"112a1cde-5990-4140-97dd-c2bbc4f73197","Type":"ContainerDied","Data":"cf816b8f59cc18e67ef8edcaed9119f0334f92970ee8c49b8887e2bc061f93ab"} Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.663501 4714 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="cinder-kuttl-tests/keystone52f2-account-delete-qxsl4" secret="" err="secret \"galera-openstack-dockercfg-qdqvq\" not found" Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.663559 4714 scope.go:117] "RemoveContainer" containerID="cf816b8f59cc18e67ef8edcaed9119f0334f92970ee8c49b8887e2bc061f93ab" Jan 29 16:30:38 crc kubenswrapper[4714]: E0129 16:30:38.664047 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-delete\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-delete pod=keystone52f2-account-delete-qxsl4_cinder-kuttl-tests(112a1cde-5990-4140-97dd-c2bbc4f73197)\"" pod="cinder-kuttl-tests/keystone52f2-account-delete-qxsl4" podUID="112a1cde-5990-4140-97dd-c2bbc4f73197" Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.723332 4714 scope.go:117] "RemoveContainer" containerID="5244ecc09d3c74f3a195f77af7759f9d242093032f09d5208383b4619dda295d" Jan 29 16:30:38 crc kubenswrapper[4714]: E0129 16:30:38.727092 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5244ecc09d3c74f3a195f77af7759f9d242093032f09d5208383b4619dda295d\": container with ID starting with 5244ecc09d3c74f3a195f77af7759f9d242093032f09d5208383b4619dda295d not found: ID does not exist" containerID="5244ecc09d3c74f3a195f77af7759f9d242093032f09d5208383b4619dda295d" Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.727136 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5244ecc09d3c74f3a195f77af7759f9d242093032f09d5208383b4619dda295d"} err="failed to get container status \"5244ecc09d3c74f3a195f77af7759f9d242093032f09d5208383b4619dda295d\": rpc error: code = NotFound desc = could not find container \"5244ecc09d3c74f3a195f77af7759f9d242093032f09d5208383b4619dda295d\": container with ID starting with 5244ecc09d3c74f3a195f77af7759f9d242093032f09d5208383b4619dda295d not found: ID does not exist" Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.727193 4714 scope.go:117] "RemoveContainer" containerID="b8914d707e1c8c5545d8b139524bf6fc2f7931a0ae5843dc9cc4a838b4c891ba" Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.736026 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-jg6sh"] Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.761022 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-jg6sh"] Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.798075 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/memcached-0"] Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.800072 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/memcached-0"] Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.819957 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc4cd\" (UniqueName: \"kubernetes.io/projected/4a2536dd-4258-4b5b-863c-a76431c992ee-kube-api-access-rc4cd\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.819998 4714 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a2536dd-4258-4b5b-863c-a76431c992ee-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:38 crc kubenswrapper[4714]: I0129 16:30:38.859555 4714 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="cinder-kuttl-tests/openstack-galera-1" podUID="f8d336f2-b190-4e32-be3a-27fbf0e50a06" containerName="galera" containerID="cri-o://83c3eb1ecb12cd8202e4a2f2b14330aa7199092be01cfe200e08827657c44a8b" gracePeriod=28 Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.006654 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs"] Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.006915 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs" podUID="9aa31790-7a3c-4a66-aace-c087c0221c6b" containerName="manager" containerID="cri-o://36513d8368599e830d1d92fc041ab07bd030a4ddf286f084b21d04377e2a5dbd" gracePeriod=10 Jan 29 16:30:39 crc kubenswrapper[4714]: E0129 16:30:39.227601 4714 configmap.go:193] Couldn't get configMap cinder-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 29 16:30:39 crc kubenswrapper[4714]: E0129 16:30:39.227668 4714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/112a1cde-5990-4140-97dd-c2bbc4f73197-operator-scripts podName:112a1cde-5990-4140-97dd-c2bbc4f73197 nodeName:}" failed. No retries permitted until 2026-01-29 16:30:41.227649492 +0000 UTC m=+1247.748150612 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/112a1cde-5990-4140-97dd-c2bbc4f73197-operator-scripts") pod "keystone52f2-account-delete-qxsl4" (UID: "112a1cde-5990-4140-97dd-c2bbc4f73197") : configmap "openstack-scripts" not found Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.333276 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/cinder-operator-index-d7f6m"] Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.333818 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/cinder-operator-index-d7f6m" podUID="87506df3-b56a-4598-8309-e865dc93cf53" containerName="registry-server" containerID="cri-o://ddec768853957c5a9f807fefab5f4e26d2050966765bc9353326a1334ad4a549" gracePeriod=30 Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.387390 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6"] Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.394683 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/60e6154c28b7f915d22d12701456edcde02ff17009fbb6fad32c7757df6b2j6"] Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.439529 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.544645 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.547383 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc06a535-6f60-438e-b52d-5dc90fae8c67-credential-keys\") pod \"fc06a535-6f60-438e-b52d-5dc90fae8c67\" (UID: \"fc06a535-6f60-438e-b52d-5dc90fae8c67\") " Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.547432 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc06a535-6f60-438e-b52d-5dc90fae8c67-fernet-keys\") pod \"fc06a535-6f60-438e-b52d-5dc90fae8c67\" (UID: \"fc06a535-6f60-438e-b52d-5dc90fae8c67\") " Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.547580 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc06a535-6f60-438e-b52d-5dc90fae8c67-scripts\") pod \"fc06a535-6f60-438e-b52d-5dc90fae8c67\" (UID: \"fc06a535-6f60-438e-b52d-5dc90fae8c67\") " Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.547629 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc06a535-6f60-438e-b52d-5dc90fae8c67-config-data\") pod \"fc06a535-6f60-438e-b52d-5dc90fae8c67\" (UID: \"fc06a535-6f60-438e-b52d-5dc90fae8c67\") " Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.547656 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2vn9\" (UniqueName: \"kubernetes.io/projected/fc06a535-6f60-438e-b52d-5dc90fae8c67-kube-api-access-c2vn9\") pod \"fc06a535-6f60-438e-b52d-5dc90fae8c67\" (UID: \"fc06a535-6f60-438e-b52d-5dc90fae8c67\") " Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.552348 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc06a535-6f60-438e-b52d-5dc90fae8c67-kube-api-access-c2vn9" (OuterVolumeSpecName: "kube-api-access-c2vn9") pod "fc06a535-6f60-438e-b52d-5dc90fae8c67" (UID: "fc06a535-6f60-438e-b52d-5dc90fae8c67"). InnerVolumeSpecName "kube-api-access-c2vn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.552561 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc06a535-6f60-438e-b52d-5dc90fae8c67-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fc06a535-6f60-438e-b52d-5dc90fae8c67" (UID: "fc06a535-6f60-438e-b52d-5dc90fae8c67"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.554118 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc06a535-6f60-438e-b52d-5dc90fae8c67-scripts" (OuterVolumeSpecName: "scripts") pod "fc06a535-6f60-438e-b52d-5dc90fae8c67" (UID: "fc06a535-6f60-438e-b52d-5dc90fae8c67"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.568143 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc06a535-6f60-438e-b52d-5dc90fae8c67-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fc06a535-6f60-438e-b52d-5dc90fae8c67" (UID: "fc06a535-6f60-438e-b52d-5dc90fae8c67"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:39 crc kubenswrapper[4714]: E0129 16:30:39.568192 4714 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ddec768853957c5a9f807fefab5f4e26d2050966765bc9353326a1334ad4a549 is running failed: container process not found" containerID="ddec768853957c5a9f807fefab5f4e26d2050966765bc9353326a1334ad4a549" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 16:30:39 crc kubenswrapper[4714]: E0129 16:30:39.569285 4714 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ddec768853957c5a9f807fefab5f4e26d2050966765bc9353326a1334ad4a549 is running failed: container process not found" containerID="ddec768853957c5a9f807fefab5f4e26d2050966765bc9353326a1334ad4a549" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.576952 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc06a535-6f60-438e-b52d-5dc90fae8c67-config-data" (OuterVolumeSpecName: "config-data") pod "fc06a535-6f60-438e-b52d-5dc90fae8c67" (UID: "fc06a535-6f60-438e-b52d-5dc90fae8c67"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:39 crc kubenswrapper[4714]: E0129 16:30:39.578464 4714 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ddec768853957c5a9f807fefab5f4e26d2050966765bc9353326a1334ad4a549 is running failed: container process not found" containerID="ddec768853957c5a9f807fefab5f4e26d2050966765bc9353326a1334ad4a549" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 16:30:39 crc kubenswrapper[4714]: E0129 16:30:39.578511 4714 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ddec768853957c5a9f807fefab5f4e26d2050966765bc9353326a1334ad4a549 is running failed: container process not found" probeType="Readiness" pod="openstack-operators/cinder-operator-index-d7f6m" podUID="87506df3-b56a-4598-8309-e865dc93cf53" containerName="registry-server" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.646072 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.655527 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4dvn\" (UniqueName: \"kubernetes.io/projected/9aa31790-7a3c-4a66-aace-c087c0221c6b-kube-api-access-t4dvn\") pod \"9aa31790-7a3c-4a66-aace-c087c0221c6b\" (UID: \"9aa31790-7a3c-4a66-aace-c087c0221c6b\") " Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.655594 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9aa31790-7a3c-4a66-aace-c087c0221c6b-apiservice-cert\") pod \"9aa31790-7a3c-4a66-aace-c087c0221c6b\" (UID: \"9aa31790-7a3c-4a66-aace-c087c0221c6b\") " Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.655661 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9aa31790-7a3c-4a66-aace-c087c0221c6b-webhook-cert\") pod \"9aa31790-7a3c-4a66-aace-c087c0221c6b\" (UID: \"9aa31790-7a3c-4a66-aace-c087c0221c6b\") " Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.655987 4714 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc06a535-6f60-438e-b52d-5dc90fae8c67-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.656008 4714 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc06a535-6f60-438e-b52d-5dc90fae8c67-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.656022 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2vn9\" (UniqueName: \"kubernetes.io/projected/fc06a535-6f60-438e-b52d-5dc90fae8c67-kube-api-access-c2vn9\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.656036 4714 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc06a535-6f60-438e-b52d-5dc90fae8c67-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.656047 4714 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc06a535-6f60-438e-b52d-5dc90fae8c67-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.663829 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aa31790-7a3c-4a66-aace-c087c0221c6b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "9aa31790-7a3c-4a66-aace-c087c0221c6b" (UID: "9aa31790-7a3c-4a66-aace-c087c0221c6b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.674192 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aa31790-7a3c-4a66-aace-c087c0221c6b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "9aa31790-7a3c-4a66-aace-c087c0221c6b" (UID: "9aa31790-7a3c-4a66-aace-c087c0221c6b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.674241 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aa31790-7a3c-4a66-aace-c087c0221c6b-kube-api-access-t4dvn" (OuterVolumeSpecName: "kube-api-access-t4dvn") pod "9aa31790-7a3c-4a66-aace-c087c0221c6b" (UID: "9aa31790-7a3c-4a66-aace-c087c0221c6b"). InnerVolumeSpecName "kube-api-access-t4dvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.692916 4714 generic.go:334] "Generic (PLEG): container finished" podID="87506df3-b56a-4598-8309-e865dc93cf53" containerID="ddec768853957c5a9f807fefab5f4e26d2050966765bc9353326a1334ad4a549" exitCode=0 Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.692991 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-index-d7f6m" event={"ID":"87506df3-b56a-4598-8309-e865dc93cf53","Type":"ContainerDied","Data":"ddec768853957c5a9f807fefab5f4e26d2050966765bc9353326a1334ad4a549"} Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.696861 4714 generic.go:334] "Generic (PLEG): container finished" podID="fc06a535-6f60-438e-b52d-5dc90fae8c67" containerID="4adeb441736bb7d27c549844b9147979d27f5dbed7e42a9b990619d275a00000" exitCode=0 Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.696947 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" event={"ID":"fc06a535-6f60-438e-b52d-5dc90fae8c67","Type":"ContainerDied","Data":"4adeb441736bb7d27c549844b9147979d27f5dbed7e42a9b990619d275a00000"} Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.696978 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" event={"ID":"fc06a535-6f60-438e-b52d-5dc90fae8c67","Type":"ContainerDied","Data":"a35aebf0427b3f34153f6b20222d8016725e500d95a3072063dc0d02bd8d902e"} Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.697000 4714 scope.go:117] "RemoveContainer" containerID="4adeb441736bb7d27c549844b9147979d27f5dbed7e42a9b990619d275a00000" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.697112 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db9b49999-6gd95" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.710355 4714 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="cinder-kuttl-tests/keystone52f2-account-delete-qxsl4" secret="" err="secret \"galera-openstack-dockercfg-qdqvq\" not found" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.711986 4714 scope.go:117] "RemoveContainer" containerID="cf816b8f59cc18e67ef8edcaed9119f0334f92970ee8c49b8887e2bc061f93ab" Jan 29 16:30:39 crc kubenswrapper[4714]: E0129 16:30:39.712676 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-delete\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-delete pod=keystone52f2-account-delete-qxsl4_cinder-kuttl-tests(112a1cde-5990-4140-97dd-c2bbc4f73197)\"" pod="cinder-kuttl-tests/keystone52f2-account-delete-qxsl4" podUID="112a1cde-5990-4140-97dd-c2bbc4f73197" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.713650 4714 generic.go:334] "Generic (PLEG): container finished" podID="55e23ac1-a89b-4689-a17d-bee875f7783e" containerID="ee125ef7ce148229ad82b22d2a7578dd54b32a1f3c67998b7176056878a4676a" exitCode=0 Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.713693 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.713766 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/rabbitmq-server-0" event={"ID":"55e23ac1-a89b-4689-a17d-bee875f7783e","Type":"ContainerDied","Data":"ee125ef7ce148229ad82b22d2a7578dd54b32a1f3c67998b7176056878a4676a"} Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.713844 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/rabbitmq-server-0" event={"ID":"55e23ac1-a89b-4689-a17d-bee875f7783e","Type":"ContainerDied","Data":"299eb5904909cd50412aad25a871f9888d290b3d4f1acfe53103c96e6f05a1bc"} Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.715298 4714 generic.go:334] "Generic (PLEG): container finished" podID="9aa31790-7a3c-4a66-aace-c087c0221c6b" containerID="36513d8368599e830d1d92fc041ab07bd030a4ddf286f084b21d04377e2a5dbd" exitCode=0 Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.715487 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs" event={"ID":"9aa31790-7a3c-4a66-aace-c087c0221c6b","Type":"ContainerDied","Data":"36513d8368599e830d1d92fc041ab07bd030a4ddf286f084b21d04377e2a5dbd"} Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.715556 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs" event={"ID":"9aa31790-7a3c-4a66-aace-c087c0221c6b","Type":"ContainerDied","Data":"c82d440d4556664aec8a776ead06b3d925538dac3151f1c3b85d4cf089d48d43"} Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.715640 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.727873 4714 scope.go:117] "RemoveContainer" containerID="4adeb441736bb7d27c549844b9147979d27f5dbed7e42a9b990619d275a00000" Jan 29 16:30:39 crc kubenswrapper[4714]: E0129 16:30:39.728465 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4adeb441736bb7d27c549844b9147979d27f5dbed7e42a9b990619d275a00000\": container with ID starting with 4adeb441736bb7d27c549844b9147979d27f5dbed7e42a9b990619d275a00000 not found: ID does not exist" containerID="4adeb441736bb7d27c549844b9147979d27f5dbed7e42a9b990619d275a00000" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.728499 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4adeb441736bb7d27c549844b9147979d27f5dbed7e42a9b990619d275a00000"} err="failed to get container status \"4adeb441736bb7d27c549844b9147979d27f5dbed7e42a9b990619d275a00000\": rpc error: code = NotFound desc = could not find container \"4adeb441736bb7d27c549844b9147979d27f5dbed7e42a9b990619d275a00000\": container with ID starting with 4adeb441736bb7d27c549844b9147979d27f5dbed7e42a9b990619d275a00000 not found: ID does not exist" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.728544 4714 scope.go:117] "RemoveContainer" containerID="ee125ef7ce148229ad82b22d2a7578dd54b32a1f3c67998b7176056878a4676a" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.744839 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone-db9b49999-6gd95"] Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.750610 4714 scope.go:117] "RemoveContainer" containerID="eb95fbc4965d0aeffea6fce3a32742ee3a1bc5446fd33b5341ed6d4be042d493" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.756009 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/keystone-db9b49999-6gd95"] Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.758519 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21c03d9a-bbed-44e7-9b38-49f8915c54d2\") pod \"55e23ac1-a89b-4689-a17d-bee875f7783e\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.758631 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pgjf\" (UniqueName: \"kubernetes.io/projected/55e23ac1-a89b-4689-a17d-bee875f7783e-kube-api-access-8pgjf\") pod \"55e23ac1-a89b-4689-a17d-bee875f7783e\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.758707 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/55e23ac1-a89b-4689-a17d-bee875f7783e-plugins-conf\") pod \"55e23ac1-a89b-4689-a17d-bee875f7783e\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.758804 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/55e23ac1-a89b-4689-a17d-bee875f7783e-rabbitmq-confd\") pod \"55e23ac1-a89b-4689-a17d-bee875f7783e\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.758883 4714 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/55e23ac1-a89b-4689-a17d-bee875f7783e-rabbitmq-erlang-cookie\") pod \"55e23ac1-a89b-4689-a17d-bee875f7783e\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.758975 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/55e23ac1-a89b-4689-a17d-bee875f7783e-rabbitmq-plugins\") pod \"55e23ac1-a89b-4689-a17d-bee875f7783e\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.759826 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/55e23ac1-a89b-4689-a17d-bee875f7783e-erlang-cookie-secret\") pod \"55e23ac1-a89b-4689-a17d-bee875f7783e\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.759968 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/55e23ac1-a89b-4689-a17d-bee875f7783e-pod-info\") pod \"55e23ac1-a89b-4689-a17d-bee875f7783e\" (UID: \"55e23ac1-a89b-4689-a17d-bee875f7783e\") " Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.759611 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55e23ac1-a89b-4689-a17d-bee875f7783e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "55e23ac1-a89b-4689-a17d-bee875f7783e" (UID: "55e23ac1-a89b-4689-a17d-bee875f7783e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.759642 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55e23ac1-a89b-4689-a17d-bee875f7783e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "55e23ac1-a89b-4689-a17d-bee875f7783e" (UID: "55e23ac1-a89b-4689-a17d-bee875f7783e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.759818 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55e23ac1-a89b-4689-a17d-bee875f7783e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "55e23ac1-a89b-4689-a17d-bee875f7783e" (UID: "55e23ac1-a89b-4689-a17d-bee875f7783e"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.760839 4714 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9aa31790-7a3c-4a66-aace-c087c0221c6b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.761173 4714 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/55e23ac1-a89b-4689-a17d-bee875f7783e-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.761227 4714 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/55e23ac1-a89b-4689-a17d-bee875f7783e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.761350 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4dvn\" (UniqueName: \"kubernetes.io/projected/9aa31790-7a3c-4a66-aace-c087c0221c6b-kube-api-access-t4dvn\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.761419 4714 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/55e23ac1-a89b-4689-a17d-bee875f7783e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.761494 4714 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9aa31790-7a3c-4a66-aace-c087c0221c6b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.761822 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55e23ac1-a89b-4689-a17d-bee875f7783e-kube-api-access-8pgjf" (OuterVolumeSpecName: "kube-api-access-8pgjf") pod "55e23ac1-a89b-4689-a17d-bee875f7783e" (UID: "55e23ac1-a89b-4689-a17d-bee875f7783e"). InnerVolumeSpecName "kube-api-access-8pgjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.762180 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs"] Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.763574 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55e23ac1-a89b-4689-a17d-bee875f7783e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "55e23ac1-a89b-4689-a17d-bee875f7783e" (UID: "55e23ac1-a89b-4689-a17d-bee875f7783e"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.764243 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/55e23ac1-a89b-4689-a17d-bee875f7783e-pod-info" (OuterVolumeSpecName: "pod-info") pod "55e23ac1-a89b-4689-a17d-bee875f7783e" (UID: "55e23ac1-a89b-4689-a17d-bee875f7783e"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.767690 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5fc6d4b6f5-9mdcs"] Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.776871 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21c03d9a-bbed-44e7-9b38-49f8915c54d2" (OuterVolumeSpecName: "persistence") pod "55e23ac1-a89b-4689-a17d-bee875f7783e" (UID: "55e23ac1-a89b-4689-a17d-bee875f7783e"). InnerVolumeSpecName "pvc-21c03d9a-bbed-44e7-9b38-49f8915c54d2". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.777803 4714 scope.go:117] "RemoveContainer" containerID="ee125ef7ce148229ad82b22d2a7578dd54b32a1f3c67998b7176056878a4676a" Jan 29 16:30:39 crc kubenswrapper[4714]: E0129 16:30:39.779221 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee125ef7ce148229ad82b22d2a7578dd54b32a1f3c67998b7176056878a4676a\": container with ID starting with ee125ef7ce148229ad82b22d2a7578dd54b32a1f3c67998b7176056878a4676a not found: ID does not exist" containerID="ee125ef7ce148229ad82b22d2a7578dd54b32a1f3c67998b7176056878a4676a" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.779279 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee125ef7ce148229ad82b22d2a7578dd54b32a1f3c67998b7176056878a4676a"} err="failed to get container status \"ee125ef7ce148229ad82b22d2a7578dd54b32a1f3c67998b7176056878a4676a\": rpc error: code = NotFound desc = could not find container \"ee125ef7ce148229ad82b22d2a7578dd54b32a1f3c67998b7176056878a4676a\": container with ID starting with ee125ef7ce148229ad82b22d2a7578dd54b32a1f3c67998b7176056878a4676a not found: ID does not exist" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.779305 4714 scope.go:117] "RemoveContainer" containerID="eb95fbc4965d0aeffea6fce3a32742ee3a1bc5446fd33b5341ed6d4be042d493" Jan 29 16:30:39 crc kubenswrapper[4714]: E0129 16:30:39.779561 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb95fbc4965d0aeffea6fce3a32742ee3a1bc5446fd33b5341ed6d4be042d493\": container with ID starting with eb95fbc4965d0aeffea6fce3a32742ee3a1bc5446fd33b5341ed6d4be042d493 not found: ID does not exist" containerID="eb95fbc4965d0aeffea6fce3a32742ee3a1bc5446fd33b5341ed6d4be042d493" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.779606 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb95fbc4965d0aeffea6fce3a32742ee3a1bc5446fd33b5341ed6d4be042d493"} err="failed to get container status \"eb95fbc4965d0aeffea6fce3a32742ee3a1bc5446fd33b5341ed6d4be042d493\": rpc error: code = NotFound desc = could not find container \"eb95fbc4965d0aeffea6fce3a32742ee3a1bc5446fd33b5341ed6d4be042d493\": container with ID starting with eb95fbc4965d0aeffea6fce3a32742ee3a1bc5446fd33b5341ed6d4be042d493 not found: ID does not exist" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.779621 4714 scope.go:117] "RemoveContainer" containerID="36513d8368599e830d1d92fc041ab07bd030a4ddf286f084b21d04377e2a5dbd" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.779867 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-index-d7f6m" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.802623 4714 scope.go:117] "RemoveContainer" containerID="36513d8368599e830d1d92fc041ab07bd030a4ddf286f084b21d04377e2a5dbd" Jan 29 16:30:39 crc kubenswrapper[4714]: E0129 16:30:39.803118 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36513d8368599e830d1d92fc041ab07bd030a4ddf286f084b21d04377e2a5dbd\": container with ID starting with 36513d8368599e830d1d92fc041ab07bd030a4ddf286f084b21d04377e2a5dbd not found: ID does not exist" containerID="36513d8368599e830d1d92fc041ab07bd030a4ddf286f084b21d04377e2a5dbd" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.803179 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36513d8368599e830d1d92fc041ab07bd030a4ddf286f084b21d04377e2a5dbd"} err="failed to get container status \"36513d8368599e830d1d92fc041ab07bd030a4ddf286f084b21d04377e2a5dbd\": rpc error: code = NotFound desc = could not find container \"36513d8368599e830d1d92fc041ab07bd030a4ddf286f084b21d04377e2a5dbd\": container with ID starting with 36513d8368599e830d1d92fc041ab07bd030a4ddf286f084b21d04377e2a5dbd not found: ID does not exist" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.831611 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55e23ac1-a89b-4689-a17d-bee875f7783e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "55e23ac1-a89b-4689-a17d-bee875f7783e" (UID: "55e23ac1-a89b-4689-a17d-bee875f7783e"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.862837 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6lgj\" (UniqueName: \"kubernetes.io/projected/87506df3-b56a-4598-8309-e865dc93cf53-kube-api-access-m6lgj\") pod \"87506df3-b56a-4598-8309-e865dc93cf53\" (UID: \"87506df3-b56a-4598-8309-e865dc93cf53\") " Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.863234 4714 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/55e23ac1-a89b-4689-a17d-bee875f7783e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.863276 4714 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/55e23ac1-a89b-4689-a17d-bee875f7783e-pod-info\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.863315 4714 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-21c03d9a-bbed-44e7-9b38-49f8915c54d2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21c03d9a-bbed-44e7-9b38-49f8915c54d2\") on node \"crc\" " Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.863330 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pgjf\" (UniqueName: \"kubernetes.io/projected/55e23ac1-a89b-4689-a17d-bee875f7783e-kube-api-access-8pgjf\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.863367 4714 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/55e23ac1-a89b-4689-a17d-bee875f7783e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:39 crc 
kubenswrapper[4714]: I0129 16:30:39.865791 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87506df3-b56a-4598-8309-e865dc93cf53-kube-api-access-m6lgj" (OuterVolumeSpecName: "kube-api-access-m6lgj") pod "87506df3-b56a-4598-8309-e865dc93cf53" (UID: "87506df3-b56a-4598-8309-e865dc93cf53"). InnerVolumeSpecName "kube-api-access-m6lgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.882821 4714 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.883041 4714 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-21c03d9a-bbed-44e7-9b38-49f8915c54d2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21c03d9a-bbed-44e7-9b38-49f8915c54d2") on node "crc" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.965035 4714 reconciler_common.go:293] "Volume detached for volume \"pvc-21c03d9a-bbed-44e7-9b38-49f8915c54d2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21c03d9a-bbed-44e7-9b38-49f8915c54d2\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:39 crc kubenswrapper[4714]: I0129 16:30:39.965091 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6lgj\" (UniqueName: \"kubernetes.io/projected/87506df3-b56a-4598-8309-e865dc93cf53-kube-api-access-m6lgj\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.044375 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.048475 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.191598 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eecc358-9581-489e-97ae-f600d35a7613" path="/var/lib/kubelet/pods/0eecc358-9581-489e-97ae-f600d35a7613/volumes" Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.192428 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a2536dd-4258-4b5b-863c-a76431c992ee" path="/var/lib/kubelet/pods/4a2536dd-4258-4b5b-863c-a76431c992ee/volumes" Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.192959 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55e23ac1-a89b-4689-a17d-bee875f7783e" path="/var/lib/kubelet/pods/55e23ac1-a89b-4689-a17d-bee875f7783e/volumes" Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.194073 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aa31790-7a3c-4a66-aace-c087c0221c6b" path="/var/lib/kubelet/pods/9aa31790-7a3c-4a66-aace-c087c0221c6b/volumes" Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.194833 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea" path="/var/lib/kubelet/pods/d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea/volumes" Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.195366 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc06a535-6f60-438e-b52d-5dc90fae8c67" path="/var/lib/kubelet/pods/fc06a535-6f60-438e-b52d-5dc90fae8c67/volumes" Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.726148 4714 generic.go:334] "Generic (PLEG): container finished" podID="f8d336f2-b190-4e32-be3a-27fbf0e50a06" 
containerID="83c3eb1ecb12cd8202e4a2f2b14330aa7199092be01cfe200e08827657c44a8b" exitCode=0 Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.726239 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-1" event={"ID":"f8d336f2-b190-4e32-be3a-27fbf0e50a06","Type":"ContainerDied","Data":"83c3eb1ecb12cd8202e4a2f2b14330aa7199092be01cfe200e08827657c44a8b"} Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.728291 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-index-d7f6m" event={"ID":"87506df3-b56a-4598-8309-e865dc93cf53","Type":"ContainerDied","Data":"6f87b469ba7b044e8c1285048b5bce1bfd385e6eeae34b8f824127e313741cf2"} Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.728341 4714 scope.go:117] "RemoveContainer" containerID="ddec768853957c5a9f807fefab5f4e26d2050966765bc9353326a1334ad4a549" Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.728434 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-index-d7f6m" Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.767033 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/cinder-operator-index-d7f6m"] Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.769731 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/cinder-operator-index-d7f6m"] Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.800035 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone-db-create-4hhfq"] Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.809611 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/keystone-db-create-4hhfq"] Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.819091 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone-52f2-account-create-update-5x6mg"] Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.822854 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone52f2-account-delete-qxsl4"] Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.828994 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/keystone-52f2-account-create-update-5x6mg"] Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.868141 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.883947 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/openstack-galera-0" podUID="e27f02c1-a7d5-4d49-838b-df5445720a07" containerName="galera" containerID="cri-o://b027a08741bc7af4590803c43740e388681a5f71be01b0ea2974650f0e3e74fc" gracePeriod=26 Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.981307 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f8d336f2-b190-4e32-be3a-27fbf0e50a06-config-data-generated\") pod \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") " Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.981419 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f8d336f2-b190-4e32-be3a-27fbf0e50a06-config-data-default\") pod \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") " Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.981487 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f8d336f2-b190-4e32-be3a-27fbf0e50a06-kolla-config\") pod \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") " Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.981551 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8d336f2-b190-4e32-be3a-27fbf0e50a06-operator-scripts\") pod \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") " Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.981580 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") " Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.981726 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhbgj\" (UniqueName: \"kubernetes.io/projected/f8d336f2-b190-4e32-be3a-27fbf0e50a06-kube-api-access-lhbgj\") pod \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\" (UID: \"f8d336f2-b190-4e32-be3a-27fbf0e50a06\") " Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.982742 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8d336f2-b190-4e32-be3a-27fbf0e50a06-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "f8d336f2-b190-4e32-be3a-27fbf0e50a06" (UID: "f8d336f2-b190-4e32-be3a-27fbf0e50a06"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.983435 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8d336f2-b190-4e32-be3a-27fbf0e50a06-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "f8d336f2-b190-4e32-be3a-27fbf0e50a06" (UID: "f8d336f2-b190-4e32-be3a-27fbf0e50a06"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.983694 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8d336f2-b190-4e32-be3a-27fbf0e50a06-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f8d336f2-b190-4e32-be3a-27fbf0e50a06" (UID: "f8d336f2-b190-4e32-be3a-27fbf0e50a06"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.983987 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8d336f2-b190-4e32-be3a-27fbf0e50a06-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "f8d336f2-b190-4e32-be3a-27fbf0e50a06" (UID: "f8d336f2-b190-4e32-be3a-27fbf0e50a06"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:40 crc kubenswrapper[4714]: I0129 16:30:40.995792 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8d336f2-b190-4e32-be3a-27fbf0e50a06-kube-api-access-lhbgj" (OuterVolumeSpecName: "kube-api-access-lhbgj") pod "f8d336f2-b190-4e32-be3a-27fbf0e50a06" (UID: "f8d336f2-b190-4e32-be3a-27fbf0e50a06"). InnerVolumeSpecName "kube-api-access-lhbgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.002339 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "mysql-db") pod "f8d336f2-b190-4e32-be3a-27fbf0e50a06" (UID: "f8d336f2-b190-4e32-be3a-27fbf0e50a06"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.083466 4714 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f8d336f2-b190-4e32-be3a-27fbf0e50a06-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.083830 4714 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f8d336f2-b190-4e32-be3a-27fbf0e50a06-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.083845 4714 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8d336f2-b190-4e32-be3a-27fbf0e50a06-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.083873 4714 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.083886 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhbgj\" (UniqueName: \"kubernetes.io/projected/f8d336f2-b190-4e32-be3a-27fbf0e50a06-kube-api-access-lhbgj\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.083900 4714 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f8d336f2-b190-4e32-be3a-27fbf0e50a06-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.090512 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone52f2-account-delete-qxsl4" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.098923 4714 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.184442 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrfh5\" (UniqueName: \"kubernetes.io/projected/112a1cde-5990-4140-97dd-c2bbc4f73197-kube-api-access-rrfh5\") pod \"112a1cde-5990-4140-97dd-c2bbc4f73197\" (UID: \"112a1cde-5990-4140-97dd-c2bbc4f73197\") " Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.184546 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/112a1cde-5990-4140-97dd-c2bbc4f73197-operator-scripts\") pod \"112a1cde-5990-4140-97dd-c2bbc4f73197\" (UID: \"112a1cde-5990-4140-97dd-c2bbc4f73197\") " Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.184898 4714 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.185255 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/112a1cde-5990-4140-97dd-c2bbc4f73197-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "112a1cde-5990-4140-97dd-c2bbc4f73197" (UID: "112a1cde-5990-4140-97dd-c2bbc4f73197"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.189779 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/112a1cde-5990-4140-97dd-c2bbc4f73197-kube-api-access-rrfh5" (OuterVolumeSpecName: "kube-api-access-rrfh5") pod "112a1cde-5990-4140-97dd-c2bbc4f73197" (UID: "112a1cde-5990-4140-97dd-c2bbc4f73197"). InnerVolumeSpecName "kube-api-access-rrfh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.286497 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrfh5\" (UniqueName: \"kubernetes.io/projected/112a1cde-5990-4140-97dd-c2bbc4f73197-kube-api-access-rrfh5\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.286532 4714 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/112a1cde-5990-4140-97dd-c2bbc4f73197-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.585214 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.690429 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e27f02c1-a7d5-4d49-838b-df5445720a07-config-data-default\") pod \"e27f02c1-a7d5-4d49-838b-df5445720a07\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") " Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.690490 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6vvb\" (UniqueName: \"kubernetes.io/projected/e27f02c1-a7d5-4d49-838b-df5445720a07-kube-api-access-p6vvb\") pod \"e27f02c1-a7d5-4d49-838b-df5445720a07\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") " Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.690571 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e27f02c1-a7d5-4d49-838b-df5445720a07-operator-scripts\") pod \"e27f02c1-a7d5-4d49-838b-df5445720a07\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") " Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.690590 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e27f02c1-a7d5-4d49-838b-df5445720a07-kolla-config\") pod \"e27f02c1-a7d5-4d49-838b-df5445720a07\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") " Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.690616 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"e27f02c1-a7d5-4d49-838b-df5445720a07\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") " Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.690636 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e27f02c1-a7d5-4d49-838b-df5445720a07-config-data-generated\") pod \"e27f02c1-a7d5-4d49-838b-df5445720a07\" (UID: \"e27f02c1-a7d5-4d49-838b-df5445720a07\") " Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.691198 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e27f02c1-a7d5-4d49-838b-df5445720a07-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "e27f02c1-a7d5-4d49-838b-df5445720a07" (UID: "e27f02c1-a7d5-4d49-838b-df5445720a07"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.691211 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e27f02c1-a7d5-4d49-838b-df5445720a07-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "e27f02c1-a7d5-4d49-838b-df5445720a07" (UID: "e27f02c1-a7d5-4d49-838b-df5445720a07"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.691344 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e27f02c1-a7d5-4d49-838b-df5445720a07-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "e27f02c1-a7d5-4d49-838b-df5445720a07" (UID: "e27f02c1-a7d5-4d49-838b-df5445720a07"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.691453 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e27f02c1-a7d5-4d49-838b-df5445720a07-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e27f02c1-a7d5-4d49-838b-df5445720a07" (UID: "e27f02c1-a7d5-4d49-838b-df5445720a07"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.694569 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e27f02c1-a7d5-4d49-838b-df5445720a07-kube-api-access-p6vvb" (OuterVolumeSpecName: "kube-api-access-p6vvb") pod "e27f02c1-a7d5-4d49-838b-df5445720a07" (UID: "e27f02c1-a7d5-4d49-838b-df5445720a07"). InnerVolumeSpecName "kube-api-access-p6vvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.701199 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "mysql-db") pod "e27f02c1-a7d5-4d49-838b-df5445720a07" (UID: "e27f02c1-a7d5-4d49-838b-df5445720a07"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.747967 4714 generic.go:334] "Generic (PLEG): container finished" podID="e27f02c1-a7d5-4d49-838b-df5445720a07" containerID="b027a08741bc7af4590803c43740e388681a5f71be01b0ea2974650f0e3e74fc" exitCode=0 Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.748026 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-0" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.748041 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-0" event={"ID":"e27f02c1-a7d5-4d49-838b-df5445720a07","Type":"ContainerDied","Data":"b027a08741bc7af4590803c43740e388681a5f71be01b0ea2974650f0e3e74fc"} Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.748293 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-0" event={"ID":"e27f02c1-a7d5-4d49-838b-df5445720a07","Type":"ContainerDied","Data":"37a7ed7cc71d5ba4399190ff48b8e2d70a326be5e9ad5c8773900669dfc3740e"} Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.748320 4714 scope.go:117] "RemoveContainer" containerID="b027a08741bc7af4590803c43740e388681a5f71be01b0ea2974650f0e3e74fc" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.751184 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-1" event={"ID":"f8d336f2-b190-4e32-be3a-27fbf0e50a06","Type":"ContainerDied","Data":"2730c5f20c68259dc37f00e3d986e43810a9d5fe85207a2c704b5656855e553b"} Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.751263 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-1" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.758511 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone52f2-account-delete-qxsl4" event={"ID":"112a1cde-5990-4140-97dd-c2bbc4f73197","Type":"ContainerDied","Data":"886af71ec9bb3cd0c3ff50ec50762de0bc190d913b4893fee8dbe7877dbfc19b"} Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.758552 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone52f2-account-delete-qxsl4" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.768468 4714 scope.go:117] "RemoveContainer" containerID="2c8a621ff05274e374040ac76aa490fcc2858782c45b93e547a0c03c210a4530" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.782162 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/openstack-galera-0"] Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.786755 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/openstack-galera-0"] Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.793113 4714 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e27f02c1-a7d5-4d49-838b-df5445720a07-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.793159 4714 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e27f02c1-a7d5-4d49-838b-df5445720a07-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.793203 4714 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.793223 4714 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e27f02c1-a7d5-4d49-838b-df5445720a07-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.793245 4714 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e27f02c1-a7d5-4d49-838b-df5445720a07-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.793264 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6vvb\" (UniqueName: \"kubernetes.io/projected/e27f02c1-a7d5-4d49-838b-df5445720a07-kube-api-access-p6vvb\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.794289 4714 scope.go:117] "RemoveContainer" containerID="b027a08741bc7af4590803c43740e388681a5f71be01b0ea2974650f0e3e74fc" Jan 29 16:30:41 crc kubenswrapper[4714]: E0129 16:30:41.795132 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b027a08741bc7af4590803c43740e388681a5f71be01b0ea2974650f0e3e74fc\": container with ID starting with b027a08741bc7af4590803c43740e388681a5f71be01b0ea2974650f0e3e74fc not found: ID does not exist" containerID="b027a08741bc7af4590803c43740e388681a5f71be01b0ea2974650f0e3e74fc" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.795162 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b027a08741bc7af4590803c43740e388681a5f71be01b0ea2974650f0e3e74fc"} err="failed to get container status \"b027a08741bc7af4590803c43740e388681a5f71be01b0ea2974650f0e3e74fc\": rpc error: code = NotFound desc = could not find container \"b027a08741bc7af4590803c43740e388681a5f71be01b0ea2974650f0e3e74fc\": container with ID starting with b027a08741bc7af4590803c43740e388681a5f71be01b0ea2974650f0e3e74fc not found: ID does not exist" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.795187 4714 scope.go:117] 
"RemoveContainer" containerID="2c8a621ff05274e374040ac76aa490fcc2858782c45b93e547a0c03c210a4530" Jan 29 16:30:41 crc kubenswrapper[4714]: E0129 16:30:41.797288 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c8a621ff05274e374040ac76aa490fcc2858782c45b93e547a0c03c210a4530\": container with ID starting with 2c8a621ff05274e374040ac76aa490fcc2858782c45b93e547a0c03c210a4530 not found: ID does not exist" containerID="2c8a621ff05274e374040ac76aa490fcc2858782c45b93e547a0c03c210a4530" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.797312 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c8a621ff05274e374040ac76aa490fcc2858782c45b93e547a0c03c210a4530"} err="failed to get container status \"2c8a621ff05274e374040ac76aa490fcc2858782c45b93e547a0c03c210a4530\": rpc error: code = NotFound desc = could not find container \"2c8a621ff05274e374040ac76aa490fcc2858782c45b93e547a0c03c210a4530\": container with ID starting with 2c8a621ff05274e374040ac76aa490fcc2858782c45b93e547a0c03c210a4530 not found: ID does not exist" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.797326 4714 scope.go:117] "RemoveContainer" containerID="83c3eb1ecb12cd8202e4a2f2b14330aa7199092be01cfe200e08827657c44a8b" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.803276 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone52f2-account-delete-qxsl4"] Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.812395 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/keystone52f2-account-delete-qxsl4"] Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.815763 4714 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.818553 4714 scope.go:117] "RemoveContainer" containerID="7b624009e8962fd057296e2a9f997c5b0aab61c294b85ef5b94d41ebe8dd89e7" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.821343 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/openstack-galera-1"] Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.828402 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/openstack-galera-1"] Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.846265 4714 scope.go:117] "RemoveContainer" containerID="cf816b8f59cc18e67ef8edcaed9119f0334f92970ee8c49b8887e2bc061f93ab" Jan 29 16:30:41 crc kubenswrapper[4714]: I0129 16:30:41.894856 4714 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:42 crc kubenswrapper[4714]: I0129 16:30:42.192773 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="112a1cde-5990-4140-97dd-c2bbc4f73197" path="/var/lib/kubelet/pods/112a1cde-5990-4140-97dd-c2bbc4f73197/volumes" Jan 29 16:30:42 crc kubenswrapper[4714]: I0129 16:30:42.193432 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bf4c895-a323-452d-8329-cb69a752341c" path="/var/lib/kubelet/pods/4bf4c895-a323-452d-8329-cb69a752341c/volumes" Jan 29 16:30:42 crc kubenswrapper[4714]: I0129 16:30:42.193974 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87506df3-b56a-4598-8309-e865dc93cf53" 
path="/var/lib/kubelet/pods/87506df3-b56a-4598-8309-e865dc93cf53/volumes" Jan 29 16:30:42 crc kubenswrapper[4714]: I0129 16:30:42.195651 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e27f02c1-a7d5-4d49-838b-df5445720a07" path="/var/lib/kubelet/pods/e27f02c1-a7d5-4d49-838b-df5445720a07/volumes" Jan 29 16:30:42 crc kubenswrapper[4714]: I0129 16:30:42.196501 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6c31118-0f0d-46fb-a9fc-d135e234fe41" path="/var/lib/kubelet/pods/f6c31118-0f0d-46fb-a9fc-d135e234fe41/volumes" Jan 29 16:30:42 crc kubenswrapper[4714]: I0129 16:30:42.197351 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8d336f2-b190-4e32-be3a-27fbf0e50a06" path="/var/lib/kubelet/pods/f8d336f2-b190-4e32-be3a-27fbf0e50a06/volumes" Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.022785 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28"] Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.023635 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28" podUID="5d602ee5-4171-4dc7-9852-88c6019696e1" containerName="manager" containerID="cri-o://0389615fc2e1a33814fa46fa1999a6f01822d1c7cd6bb84e89e87e7102d3acc5" gracePeriod=10 Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.266542 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-vtc5h"] Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.266755 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-index-vtc5h" podUID="668764e7-6295-4275-bcc9-24b680ec685f" containerName="registry-server" containerID="cri-o://dc243f10cd3a85e410a8968c999f4a42fc022a8b377403e54fe6779138d67aa8" gracePeriod=30 Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.301528 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg"] Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.306066 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efqppkg"] Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.496826 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28" Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.631003 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7sdp\" (UniqueName: \"kubernetes.io/projected/5d602ee5-4171-4dc7-9852-88c6019696e1-kube-api-access-p7sdp\") pod \"5d602ee5-4171-4dc7-9852-88c6019696e1\" (UID: \"5d602ee5-4171-4dc7-9852-88c6019696e1\") " Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.631047 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5d602ee5-4171-4dc7-9852-88c6019696e1-apiservice-cert\") pod \"5d602ee5-4171-4dc7-9852-88c6019696e1\" (UID: \"5d602ee5-4171-4dc7-9852-88c6019696e1\") " Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.631068 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5d602ee5-4171-4dc7-9852-88c6019696e1-webhook-cert\") pod \"5d602ee5-4171-4dc7-9852-88c6019696e1\" (UID: \"5d602ee5-4171-4dc7-9852-88c6019696e1\") " Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.635488 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d602ee5-4171-4dc7-9852-88c6019696e1-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "5d602ee5-4171-4dc7-9852-88c6019696e1" (UID: "5d602ee5-4171-4dc7-9852-88c6019696e1"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.635508 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d602ee5-4171-4dc7-9852-88c6019696e1-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "5d602ee5-4171-4dc7-9852-88c6019696e1" (UID: "5d602ee5-4171-4dc7-9852-88c6019696e1"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.637046 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d602ee5-4171-4dc7-9852-88c6019696e1-kube-api-access-p7sdp" (OuterVolumeSpecName: "kube-api-access-p7sdp") pod "5d602ee5-4171-4dc7-9852-88c6019696e1" (UID: "5d602ee5-4171-4dc7-9852-88c6019696e1"). InnerVolumeSpecName "kube-api-access-p7sdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.683404 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-vtc5h" Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.733417 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsc27\" (UniqueName: \"kubernetes.io/projected/668764e7-6295-4275-bcc9-24b680ec685f-kube-api-access-wsc27\") pod \"668764e7-6295-4275-bcc9-24b680ec685f\" (UID: \"668764e7-6295-4275-bcc9-24b680ec685f\") " Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.733724 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7sdp\" (UniqueName: \"kubernetes.io/projected/5d602ee5-4171-4dc7-9852-88c6019696e1-kube-api-access-p7sdp\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.733742 4714 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5d602ee5-4171-4dc7-9852-88c6019696e1-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.733751 4714 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5d602ee5-4171-4dc7-9852-88c6019696e1-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.736372 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/668764e7-6295-4275-bcc9-24b680ec685f-kube-api-access-wsc27" (OuterVolumeSpecName: "kube-api-access-wsc27") pod "668764e7-6295-4275-bcc9-24b680ec685f" (UID: "668764e7-6295-4275-bcc9-24b680ec685f"). InnerVolumeSpecName "kube-api-access-wsc27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.779510 4714 generic.go:334] "Generic (PLEG): container finished" podID="5d602ee5-4171-4dc7-9852-88c6019696e1" containerID="0389615fc2e1a33814fa46fa1999a6f01822d1c7cd6bb84e89e87e7102d3acc5" exitCode=0 Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.779568 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28" event={"ID":"5d602ee5-4171-4dc7-9852-88c6019696e1","Type":"ContainerDied","Data":"0389615fc2e1a33814fa46fa1999a6f01822d1c7cd6bb84e89e87e7102d3acc5"} Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.779598 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28" event={"ID":"5d602ee5-4171-4dc7-9852-88c6019696e1","Type":"ContainerDied","Data":"68e4148d363b3ff81741fb86667cb2d686f613c7db6f1f53e639c87301659f49"} Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.779617 4714 scope.go:117] "RemoveContainer" containerID="0389615fc2e1a33814fa46fa1999a6f01822d1c7cd6bb84e89e87e7102d3acc5" Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.779708 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28" Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.791246 4714 generic.go:334] "Generic (PLEG): container finished" podID="668764e7-6295-4275-bcc9-24b680ec685f" containerID="dc243f10cd3a85e410a8968c999f4a42fc022a8b377403e54fe6779138d67aa8" exitCode=0 Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.791276 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-vtc5h" Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.791282 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-vtc5h" event={"ID":"668764e7-6295-4275-bcc9-24b680ec685f","Type":"ContainerDied","Data":"dc243f10cd3a85e410a8968c999f4a42fc022a8b377403e54fe6779138d67aa8"} Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.791397 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-vtc5h" event={"ID":"668764e7-6295-4275-bcc9-24b680ec685f","Type":"ContainerDied","Data":"047c3adf9e0c960c54099b8f9a0a168467b6e29a2c3acff719ceb9bbe1f69c79"} Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.808598 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28"] Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.811213 4714 scope.go:117] "RemoveContainer" containerID="0389615fc2e1a33814fa46fa1999a6f01822d1c7cd6bb84e89e87e7102d3acc5" Jan 29 16:30:43 crc kubenswrapper[4714]: E0129 16:30:43.811605 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0389615fc2e1a33814fa46fa1999a6f01822d1c7cd6bb84e89e87e7102d3acc5\": container with ID starting with 0389615fc2e1a33814fa46fa1999a6f01822d1c7cd6bb84e89e87e7102d3acc5 not found: ID does not exist" containerID="0389615fc2e1a33814fa46fa1999a6f01822d1c7cd6bb84e89e87e7102d3acc5" Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.811637 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0389615fc2e1a33814fa46fa1999a6f01822d1c7cd6bb84e89e87e7102d3acc5"} err="failed to get container status \"0389615fc2e1a33814fa46fa1999a6f01822d1c7cd6bb84e89e87e7102d3acc5\": rpc error: code = NotFound desc = could not find container \"0389615fc2e1a33814fa46fa1999a6f01822d1c7cd6bb84e89e87e7102d3acc5\": container with ID starting with 0389615fc2e1a33814fa46fa1999a6f01822d1c7cd6bb84e89e87e7102d3acc5 not found: ID does not exist" Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.811658 4714 scope.go:117] "RemoveContainer" containerID="dc243f10cd3a85e410a8968c999f4a42fc022a8b377403e54fe6779138d67aa8" Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.814218 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5b97656f4c-wwx28"] Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.825211 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-vtc5h"] Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.831054 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-index-vtc5h"] Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.833906 4714 scope.go:117] "RemoveContainer" containerID="dc243f10cd3a85e410a8968c999f4a42fc022a8b377403e54fe6779138d67aa8" Jan 29 16:30:43 crc kubenswrapper[4714]: E0129 16:30:43.834419 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc243f10cd3a85e410a8968c999f4a42fc022a8b377403e54fe6779138d67aa8\": container with ID starting with dc243f10cd3a85e410a8968c999f4a42fc022a8b377403e54fe6779138d67aa8 not found: ID does not exist" containerID="dc243f10cd3a85e410a8968c999f4a42fc022a8b377403e54fe6779138d67aa8" Jan 29 16:30:43 crc 
kubenswrapper[4714]: I0129 16:30:43.834461 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc243f10cd3a85e410a8968c999f4a42fc022a8b377403e54fe6779138d67aa8"} err="failed to get container status \"dc243f10cd3a85e410a8968c999f4a42fc022a8b377403e54fe6779138d67aa8\": rpc error: code = NotFound desc = could not find container \"dc243f10cd3a85e410a8968c999f4a42fc022a8b377403e54fe6779138d67aa8\": container with ID starting with dc243f10cd3a85e410a8968c999f4a42fc022a8b377403e54fe6779138d67aa8 not found: ID does not exist" Jan 29 16:30:43 crc kubenswrapper[4714]: I0129 16:30:43.834547 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsc27\" (UniqueName: \"kubernetes.io/projected/668764e7-6295-4275-bcc9-24b680ec685f-kube-api-access-wsc27\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:44 crc kubenswrapper[4714]: I0129 16:30:44.192576 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d602ee5-4171-4dc7-9852-88c6019696e1" path="/var/lib/kubelet/pods/5d602ee5-4171-4dc7-9852-88c6019696e1/volumes" Jan 29 16:30:44 crc kubenswrapper[4714]: I0129 16:30:44.193154 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="668764e7-6295-4275-bcc9-24b680ec685f" path="/var/lib/kubelet/pods/668764e7-6295-4275-bcc9-24b680ec685f/volumes" Jan 29 16:30:44 crc kubenswrapper[4714]: I0129 16:30:44.193714 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd598a7a-34ba-4392-908b-c18d89648bb5" path="/var/lib/kubelet/pods/dd598a7a-34ba-4392-908b-c18d89648bb5/volumes" Jan 29 16:30:45 crc kubenswrapper[4714]: I0129 16:30:45.838038 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-xprqx"] Jan 29 16:30:45 crc kubenswrapper[4714]: I0129 16:30:45.838914 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-xprqx" podUID="12c4aba1-9cb7-4f12-bbd5-fee3ccfa0c88" containerName="operator" containerID="cri-o://0c6b038c874feac00a0ea4a05fbd431f7014a4ed8fc71291c23ec972425e646d" gracePeriod=10 Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.145854 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-ddf2f"] Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.146315 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-index-ddf2f" podUID="e6d50b97-e5e2-426e-b881-dfb2077c0838" containerName="registry-server" containerID="cri-o://f6b5016e56cc05cedf6153ecd27ad5475f41f09c64ae9c2dcbdb602e86bd0445" gracePeriod=30 Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.170203 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s"] Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.173229 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590zhd6s"] Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.195736 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74070831-862a-4d0a-83b0-4e3d64891601" path="/var/lib/kubelet/pods/74070831-862a-4d0a-83b0-4e3d64891601/volumes" Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.284257 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-xprqx" Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.374470 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2fkn\" (UniqueName: \"kubernetes.io/projected/12c4aba1-9cb7-4f12-bbd5-fee3ccfa0c88-kube-api-access-p2fkn\") pod \"12c4aba1-9cb7-4f12-bbd5-fee3ccfa0c88\" (UID: \"12c4aba1-9cb7-4f12-bbd5-fee3ccfa0c88\") " Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.392723 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12c4aba1-9cb7-4f12-bbd5-fee3ccfa0c88-kube-api-access-p2fkn" (OuterVolumeSpecName: "kube-api-access-p2fkn") pod "12c4aba1-9cb7-4f12-bbd5-fee3ccfa0c88" (UID: "12c4aba1-9cb7-4f12-bbd5-fee3ccfa0c88"). InnerVolumeSpecName "kube-api-access-p2fkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.476347 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2fkn\" (UniqueName: \"kubernetes.io/projected/12c4aba1-9cb7-4f12-bbd5-fee3ccfa0c88-kube-api-access-p2fkn\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.590426 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-ddf2f" Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.679115 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v42s4\" (UniqueName: \"kubernetes.io/projected/e6d50b97-e5e2-426e-b881-dfb2077c0838-kube-api-access-v42s4\") pod \"e6d50b97-e5e2-426e-b881-dfb2077c0838\" (UID: \"e6d50b97-e5e2-426e-b881-dfb2077c0838\") " Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.687106 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6d50b97-e5e2-426e-b881-dfb2077c0838-kube-api-access-v42s4" (OuterVolumeSpecName: "kube-api-access-v42s4") pod "e6d50b97-e5e2-426e-b881-dfb2077c0838" (UID: "e6d50b97-e5e2-426e-b881-dfb2077c0838"). InnerVolumeSpecName "kube-api-access-v42s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.781222 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v42s4\" (UniqueName: \"kubernetes.io/projected/e6d50b97-e5e2-426e-b881-dfb2077c0838-kube-api-access-v42s4\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.823874 4714 generic.go:334] "Generic (PLEG): container finished" podID="e6d50b97-e5e2-426e-b881-dfb2077c0838" containerID="f6b5016e56cc05cedf6153ecd27ad5475f41f09c64ae9c2dcbdb602e86bd0445" exitCode=0 Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.823925 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-ddf2f" Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.823919 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-ddf2f" event={"ID":"e6d50b97-e5e2-426e-b881-dfb2077c0838","Type":"ContainerDied","Data":"f6b5016e56cc05cedf6153ecd27ad5475f41f09c64ae9c2dcbdb602e86bd0445"} Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.824052 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-ddf2f" event={"ID":"e6d50b97-e5e2-426e-b881-dfb2077c0838","Type":"ContainerDied","Data":"74d0d3468956b4f9354638adf80a846fa26535a7e74e370eeb721012c70bb9d7"} Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.824075 4714 scope.go:117] "RemoveContainer" containerID="f6b5016e56cc05cedf6153ecd27ad5475f41f09c64ae9c2dcbdb602e86bd0445" Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.826115 4714 generic.go:334] "Generic (PLEG): container finished" podID="12c4aba1-9cb7-4f12-bbd5-fee3ccfa0c88" containerID="0c6b038c874feac00a0ea4a05fbd431f7014a4ed8fc71291c23ec972425e646d" exitCode=0 Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.826144 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-xprqx" event={"ID":"12c4aba1-9cb7-4f12-bbd5-fee3ccfa0c88","Type":"ContainerDied","Data":"0c6b038c874feac00a0ea4a05fbd431f7014a4ed8fc71291c23ec972425e646d"} Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.826160 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-xprqx" event={"ID":"12c4aba1-9cb7-4f12-bbd5-fee3ccfa0c88","Type":"ContainerDied","Data":"9cef1a74877ad1a74c47eaad2e3c11a8681670f01de34b07c07c5549def07a12"} Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.827026 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-xprqx" Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.856765 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-ddf2f"] Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.860701 4714 scope.go:117] "RemoveContainer" containerID="f6b5016e56cc05cedf6153ecd27ad5475f41f09c64ae9c2dcbdb602e86bd0445" Jan 29 16:30:46 crc kubenswrapper[4714]: E0129 16:30:46.861096 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6b5016e56cc05cedf6153ecd27ad5475f41f09c64ae9c2dcbdb602e86bd0445\": container with ID starting with f6b5016e56cc05cedf6153ecd27ad5475f41f09c64ae9c2dcbdb602e86bd0445 not found: ID does not exist" containerID="f6b5016e56cc05cedf6153ecd27ad5475f41f09c64ae9c2dcbdb602e86bd0445" Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.861141 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6b5016e56cc05cedf6153ecd27ad5475f41f09c64ae9c2dcbdb602e86bd0445"} err="failed to get container status \"f6b5016e56cc05cedf6153ecd27ad5475f41f09c64ae9c2dcbdb602e86bd0445\": rpc error: code = NotFound desc = could not find container \"f6b5016e56cc05cedf6153ecd27ad5475f41f09c64ae9c2dcbdb602e86bd0445\": container with ID starting with f6b5016e56cc05cedf6153ecd27ad5475f41f09c64ae9c2dcbdb602e86bd0445 not found: ID does not exist" Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.861174 4714 scope.go:117] "RemoveContainer" containerID="0c6b038c874feac00a0ea4a05fbd431f7014a4ed8fc71291c23ec972425e646d" Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.863082 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-ddf2f"] Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.886037 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-xprqx"] Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.888030 4714 scope.go:117] "RemoveContainer" containerID="0c6b038c874feac00a0ea4a05fbd431f7014a4ed8fc71291c23ec972425e646d" Jan 29 16:30:46 crc kubenswrapper[4714]: E0129 16:30:46.888397 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c6b038c874feac00a0ea4a05fbd431f7014a4ed8fc71291c23ec972425e646d\": container with ID starting with 0c6b038c874feac00a0ea4a05fbd431f7014a4ed8fc71291c23ec972425e646d not found: ID does not exist" containerID="0c6b038c874feac00a0ea4a05fbd431f7014a4ed8fc71291c23ec972425e646d" Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.888430 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c6b038c874feac00a0ea4a05fbd431f7014a4ed8fc71291c23ec972425e646d"} err="failed to get container status \"0c6b038c874feac00a0ea4a05fbd431f7014a4ed8fc71291c23ec972425e646d\": rpc error: code = NotFound desc = could not find container \"0c6b038c874feac00a0ea4a05fbd431f7014a4ed8fc71291c23ec972425e646d\": container with ID starting with 0c6b038c874feac00a0ea4a05fbd431f7014a4ed8fc71291c23ec972425e646d not found: ID does not exist" Jan 29 16:30:46 crc kubenswrapper[4714]: I0129 16:30:46.889633 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-xprqx"] Jan 29 16:30:48 crc kubenswrapper[4714]: E0129 16:30:48.186689 
4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:30:48 crc kubenswrapper[4714]: I0129 16:30:48.194302 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12c4aba1-9cb7-4f12-bbd5-fee3ccfa0c88" path="/var/lib/kubelet/pods/12c4aba1-9cb7-4f12-bbd5-fee3ccfa0c88/volumes" Jan 29 16:30:48 crc kubenswrapper[4714]: I0129 16:30:48.195069 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6d50b97-e5e2-426e-b881-dfb2077c0838" path="/var/lib/kubelet/pods/e6d50b97-e5e2-426e-b881-dfb2077c0838/volumes" Jan 29 16:30:50 crc kubenswrapper[4714]: I0129 16:30:50.559631 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq"] Jan 29 16:30:50 crc kubenswrapper[4714]: I0129 16:30:50.560128 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq" podUID="2779e724-225f-4a5f-9e2c-3b05fe08dff2" containerName="manager" containerID="cri-o://cec1cafa23793d2e5f4bd3af8e35a522a2c8cc5c802408ae2ad9896bd189471e" gracePeriod=10 Jan 29 16:30:50 crc kubenswrapper[4714]: I0129 16:30:50.850839 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-9xq82"] Jan 29 16:30:50 crc kubenswrapper[4714]: I0129 16:30:50.851460 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-index-9xq82" podUID="9dcd8561-aa17-46a8-b184-0495c320a33b" containerName="registry-server" containerID="cri-o://17fa7e5da52af363967b26a09672574689ebcabaf095f6c1443af9a04ea81254" gracePeriod=30 Jan 29 16:30:50 crc kubenswrapper[4714]: I0129 16:30:50.854811 4714 generic.go:334] "Generic (PLEG): container finished" podID="2779e724-225f-4a5f-9e2c-3b05fe08dff2" containerID="cec1cafa23793d2e5f4bd3af8e35a522a2c8cc5c802408ae2ad9896bd189471e" exitCode=0 Jan 29 16:30:50 crc kubenswrapper[4714]: I0129 16:30:50.854890 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq" event={"ID":"2779e724-225f-4a5f-9e2c-3b05fe08dff2","Type":"ContainerDied","Data":"cec1cafa23793d2e5f4bd3af8e35a522a2c8cc5c802408ae2ad9896bd189471e"} Jan 29 16:30:50 crc kubenswrapper[4714]: I0129 16:30:50.921750 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs"] Jan 29 16:30:50 crc kubenswrapper[4714]: I0129 16:30:50.926703 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f757642kvs"] Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.154671 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq" Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.311498 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-9xq82" Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.341036 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2779e724-225f-4a5f-9e2c-3b05fe08dff2-apiservice-cert\") pod \"2779e724-225f-4a5f-9e2c-3b05fe08dff2\" (UID: \"2779e724-225f-4a5f-9e2c-3b05fe08dff2\") " Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.341115 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2779e724-225f-4a5f-9e2c-3b05fe08dff2-webhook-cert\") pod \"2779e724-225f-4a5f-9e2c-3b05fe08dff2\" (UID: \"2779e724-225f-4a5f-9e2c-3b05fe08dff2\") " Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.341170 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts4kr\" (UniqueName: \"kubernetes.io/projected/9dcd8561-aa17-46a8-b184-0495c320a33b-kube-api-access-ts4kr\") pod \"9dcd8561-aa17-46a8-b184-0495c320a33b\" (UID: \"9dcd8561-aa17-46a8-b184-0495c320a33b\") " Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.341274 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jxvt\" (UniqueName: \"kubernetes.io/projected/2779e724-225f-4a5f-9e2c-3b05fe08dff2-kube-api-access-6jxvt\") pod \"2779e724-225f-4a5f-9e2c-3b05fe08dff2\" (UID: \"2779e724-225f-4a5f-9e2c-3b05fe08dff2\") " Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.345985 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dcd8561-aa17-46a8-b184-0495c320a33b-kube-api-access-ts4kr" (OuterVolumeSpecName: "kube-api-access-ts4kr") pod "9dcd8561-aa17-46a8-b184-0495c320a33b" (UID: "9dcd8561-aa17-46a8-b184-0495c320a33b"). InnerVolumeSpecName "kube-api-access-ts4kr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.346085 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2779e724-225f-4a5f-9e2c-3b05fe08dff2-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "2779e724-225f-4a5f-9e2c-3b05fe08dff2" (UID: "2779e724-225f-4a5f-9e2c-3b05fe08dff2"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.347195 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2779e724-225f-4a5f-9e2c-3b05fe08dff2-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "2779e724-225f-4a5f-9e2c-3b05fe08dff2" (UID: "2779e724-225f-4a5f-9e2c-3b05fe08dff2"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.354211 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2779e724-225f-4a5f-9e2c-3b05fe08dff2-kube-api-access-6jxvt" (OuterVolumeSpecName: "kube-api-access-6jxvt") pod "2779e724-225f-4a5f-9e2c-3b05fe08dff2" (UID: "2779e724-225f-4a5f-9e2c-3b05fe08dff2"). InnerVolumeSpecName "kube-api-access-6jxvt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.442032 4714 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2779e724-225f-4a5f-9e2c-3b05fe08dff2-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.442073 4714 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2779e724-225f-4a5f-9e2c-3b05fe08dff2-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.442086 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ts4kr\" (UniqueName: \"kubernetes.io/projected/9dcd8561-aa17-46a8-b184-0495c320a33b-kube-api-access-ts4kr\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.442098 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jxvt\" (UniqueName: \"kubernetes.io/projected/2779e724-225f-4a5f-9e2c-3b05fe08dff2-kube-api-access-6jxvt\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.864636 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq" event={"ID":"2779e724-225f-4a5f-9e2c-3b05fe08dff2","Type":"ContainerDied","Data":"a29d14b8a75b07c5bbb8bc497bdaef3d2bf58ebfc2cc63259d7bb78d82a74639"} Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.864710 4714 scope.go:117] "RemoveContainer" containerID="cec1cafa23793d2e5f4bd3af8e35a522a2c8cc5c802408ae2ad9896bd189471e" Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.864737 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq" Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.867055 4714 generic.go:334] "Generic (PLEG): container finished" podID="9dcd8561-aa17-46a8-b184-0495c320a33b" containerID="17fa7e5da52af363967b26a09672574689ebcabaf095f6c1443af9a04ea81254" exitCode=0 Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.867096 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-9xq82" event={"ID":"9dcd8561-aa17-46a8-b184-0495c320a33b","Type":"ContainerDied","Data":"17fa7e5da52af363967b26a09672574689ebcabaf095f6c1443af9a04ea81254"} Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.867125 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-9xq82" event={"ID":"9dcd8561-aa17-46a8-b184-0495c320a33b","Type":"ContainerDied","Data":"b25ca92cca2c102702aabf44744c607b11ac82433ac2ea8b9c60134c6952d1ea"} Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.867122 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-9xq82" Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.903761 4714 scope.go:117] "RemoveContainer" containerID="17fa7e5da52af363967b26a09672574689ebcabaf095f6c1443af9a04ea81254" Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.904644 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-9xq82"] Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.911645 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-index-9xq82"] Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.925250 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq"] Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.931756 4714 scope.go:117] "RemoveContainer" containerID="17fa7e5da52af363967b26a09672574689ebcabaf095f6c1443af9a04ea81254" Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.932198 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-controller-manager-66f4f5476c-xqnxq"] Jan 29 16:30:51 crc kubenswrapper[4714]: E0129 16:30:51.932296 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17fa7e5da52af363967b26a09672574689ebcabaf095f6c1443af9a04ea81254\": container with ID starting with 17fa7e5da52af363967b26a09672574689ebcabaf095f6c1443af9a04ea81254 not found: ID does not exist" containerID="17fa7e5da52af363967b26a09672574689ebcabaf095f6c1443af9a04ea81254" Jan 29 16:30:51 crc kubenswrapper[4714]: I0129 16:30:51.932333 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17fa7e5da52af363967b26a09672574689ebcabaf095f6c1443af9a04ea81254"} err="failed to get container status \"17fa7e5da52af363967b26a09672574689ebcabaf095f6c1443af9a04ea81254\": rpc error: code = NotFound desc = could not find container \"17fa7e5da52af363967b26a09672574689ebcabaf095f6c1443af9a04ea81254\": container with ID starting with 17fa7e5da52af363967b26a09672574689ebcabaf095f6c1443af9a04ea81254 not found: ID does not exist" Jan 29 16:30:52 crc kubenswrapper[4714]: I0129 16:30:52.190920 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="096bd691-cca6-4566-b56c-7643e2feaef1" path="/var/lib/kubelet/pods/096bd691-cca6-4566-b56c-7643e2feaef1/volumes" Jan 29 16:30:52 crc kubenswrapper[4714]: I0129 16:30:52.191565 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2779e724-225f-4a5f-9e2c-3b05fe08dff2" path="/var/lib/kubelet/pods/2779e724-225f-4a5f-9e2c-3b05fe08dff2/volumes" Jan 29 16:30:52 crc kubenswrapper[4714]: I0129 16:30:52.192173 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dcd8561-aa17-46a8-b184-0495c320a33b" path="/var/lib/kubelet/pods/9dcd8561-aa17-46a8-b184-0495c320a33b/volumes" Jan 29 16:30:52 crc kubenswrapper[4714]: I0129 16:30:52.446499 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn"] Jan 29 16:30:52 crc kubenswrapper[4714]: I0129 16:30:52.446773 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn" podUID="8771e447-1cf7-43f9-bfab-6c1afd7476dc" containerName="manager" containerID="cri-o://265cc03bd32fdd618d4ab75713d4df2f554d8e322232952f23e67ae7895f0208" 
gracePeriod=10 Jan 29 16:30:52 crc kubenswrapper[4714]: I0129 16:30:52.743297 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-l6dkm"] Jan 29 16:30:52 crc kubenswrapper[4714]: I0129 16:30:52.743487 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-l6dkm" podUID="d990dfb7-e078-4c7e-8e98-40b10f062a04" containerName="registry-server" containerID="cri-o://0020667ef371fcb5a3d00febffc3770f6cb20130544e17735a3ccff225db36b3" gracePeriod=30 Jan 29 16:30:52 crc kubenswrapper[4714]: I0129 16:30:52.783883 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp"] Jan 29 16:30:52 crc kubenswrapper[4714]: I0129 16:30:52.796654 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40zbdnp"] Jan 29 16:30:52 crc kubenswrapper[4714]: I0129 16:30:52.885361 4714 generic.go:334] "Generic (PLEG): container finished" podID="d990dfb7-e078-4c7e-8e98-40b10f062a04" containerID="0020667ef371fcb5a3d00febffc3770f6cb20130544e17735a3ccff225db36b3" exitCode=0 Jan 29 16:30:52 crc kubenswrapper[4714]: I0129 16:30:52.885480 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-l6dkm" event={"ID":"d990dfb7-e078-4c7e-8e98-40b10f062a04","Type":"ContainerDied","Data":"0020667ef371fcb5a3d00febffc3770f6cb20130544e17735a3ccff225db36b3"} Jan 29 16:30:52 crc kubenswrapper[4714]: I0129 16:30:52.892864 4714 generic.go:334] "Generic (PLEG): container finished" podID="8771e447-1cf7-43f9-bfab-6c1afd7476dc" containerID="265cc03bd32fdd618d4ab75713d4df2f554d8e322232952f23e67ae7895f0208" exitCode=0 Jan 29 16:30:52 crc kubenswrapper[4714]: I0129 16:30:52.892973 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn" event={"ID":"8771e447-1cf7-43f9-bfab-6c1afd7476dc","Type":"ContainerDied","Data":"265cc03bd32fdd618d4ab75713d4df2f554d8e322232952f23e67ae7895f0208"} Jan 29 16:30:52 crc kubenswrapper[4714]: I0129 16:30:52.893009 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn" event={"ID":"8771e447-1cf7-43f9-bfab-6c1afd7476dc","Type":"ContainerDied","Data":"4ead9770f8d819bd3b2a9d514caf8dd18e92463bd9f126ba95d1cbb0e58fb71f"} Jan 29 16:30:52 crc kubenswrapper[4714]: I0129 16:30:52.893057 4714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ead9770f8d819bd3b2a9d514caf8dd18e92463bd9f126ba95d1cbb0e58fb71f" Jan 29 16:30:52 crc kubenswrapper[4714]: I0129 16:30:52.903720 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn" Jan 29 16:30:52 crc kubenswrapper[4714]: I0129 16:30:52.962992 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8771e447-1cf7-43f9-bfab-6c1afd7476dc-apiservice-cert\") pod \"8771e447-1cf7-43f9-bfab-6c1afd7476dc\" (UID: \"8771e447-1cf7-43f9-bfab-6c1afd7476dc\") " Jan 29 16:30:52 crc kubenswrapper[4714]: I0129 16:30:52.963233 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx2m7\" (UniqueName: \"kubernetes.io/projected/8771e447-1cf7-43f9-bfab-6c1afd7476dc-kube-api-access-mx2m7\") pod \"8771e447-1cf7-43f9-bfab-6c1afd7476dc\" (UID: \"8771e447-1cf7-43f9-bfab-6c1afd7476dc\") " Jan 29 16:30:52 crc kubenswrapper[4714]: I0129 16:30:52.963279 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8771e447-1cf7-43f9-bfab-6c1afd7476dc-webhook-cert\") pod \"8771e447-1cf7-43f9-bfab-6c1afd7476dc\" (UID: \"8771e447-1cf7-43f9-bfab-6c1afd7476dc\") " Jan 29 16:30:52 crc kubenswrapper[4714]: I0129 16:30:52.969370 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8771e447-1cf7-43f9-bfab-6c1afd7476dc-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "8771e447-1cf7-43f9-bfab-6c1afd7476dc" (UID: "8771e447-1cf7-43f9-bfab-6c1afd7476dc"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:52 crc kubenswrapper[4714]: I0129 16:30:52.969456 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8771e447-1cf7-43f9-bfab-6c1afd7476dc-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "8771e447-1cf7-43f9-bfab-6c1afd7476dc" (UID: "8771e447-1cf7-43f9-bfab-6c1afd7476dc"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:52 crc kubenswrapper[4714]: I0129 16:30:52.970823 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8771e447-1cf7-43f9-bfab-6c1afd7476dc-kube-api-access-mx2m7" (OuterVolumeSpecName: "kube-api-access-mx2m7") pod "8771e447-1cf7-43f9-bfab-6c1afd7476dc" (UID: "8771e447-1cf7-43f9-bfab-6c1afd7476dc"). InnerVolumeSpecName "kube-api-access-mx2m7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:53 crc kubenswrapper[4714]: I0129 16:30:53.064268 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx2m7\" (UniqueName: \"kubernetes.io/projected/8771e447-1cf7-43f9-bfab-6c1afd7476dc-kube-api-access-mx2m7\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:53 crc kubenswrapper[4714]: I0129 16:30:53.064303 4714 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8771e447-1cf7-43f9-bfab-6c1afd7476dc-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:53 crc kubenswrapper[4714]: I0129 16:30:53.064314 4714 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8771e447-1cf7-43f9-bfab-6c1afd7476dc-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:53 crc kubenswrapper[4714]: I0129 16:30:53.103421 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-l6dkm" Jan 29 16:30:53 crc kubenswrapper[4714]: I0129 16:30:53.266791 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r4sv\" (UniqueName: \"kubernetes.io/projected/d990dfb7-e078-4c7e-8e98-40b10f062a04-kube-api-access-8r4sv\") pod \"d990dfb7-e078-4c7e-8e98-40b10f062a04\" (UID: \"d990dfb7-e078-4c7e-8e98-40b10f062a04\") " Jan 29 16:30:53 crc kubenswrapper[4714]: I0129 16:30:53.269102 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d990dfb7-e078-4c7e-8e98-40b10f062a04-kube-api-access-8r4sv" (OuterVolumeSpecName: "kube-api-access-8r4sv") pod "d990dfb7-e078-4c7e-8e98-40b10f062a04" (UID: "d990dfb7-e078-4c7e-8e98-40b10f062a04"). InnerVolumeSpecName "kube-api-access-8r4sv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:53 crc kubenswrapper[4714]: I0129 16:30:53.368855 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r4sv\" (UniqueName: \"kubernetes.io/projected/d990dfb7-e078-4c7e-8e98-40b10f062a04-kube-api-access-8r4sv\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:53 crc kubenswrapper[4714]: I0129 16:30:53.900984 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-l6dkm" event={"ID":"d990dfb7-e078-4c7e-8e98-40b10f062a04","Type":"ContainerDied","Data":"cd19708e51ad0ae38749b06c635286e93f2554a3fecdeb70a36c4d4f40376c94"} Jan 29 16:30:53 crc kubenswrapper[4714]: I0129 16:30:53.901012 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-l6dkm" Jan 29 16:30:53 crc kubenswrapper[4714]: I0129 16:30:53.901043 4714 scope.go:117] "RemoveContainer" containerID="0020667ef371fcb5a3d00febffc3770f6cb20130544e17735a3ccff225db36b3" Jan 29 16:30:53 crc kubenswrapper[4714]: I0129 16:30:53.901005 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn" Jan 29 16:30:53 crc kubenswrapper[4714]: I0129 16:30:53.944036 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn"] Jan 29 16:30:53 crc kubenswrapper[4714]: I0129 16:30:53.950792 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7cc56799bb-ddchn"] Jan 29 16:30:53 crc kubenswrapper[4714]: I0129 16:30:53.959386 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-l6dkm"] Jan 29 16:30:53 crc kubenswrapper[4714]: I0129 16:30:53.962610 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-l6dkm"] Jan 29 16:30:54 crc kubenswrapper[4714]: I0129 16:30:54.194892 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8771e447-1cf7-43f9-bfab-6c1afd7476dc" path="/var/lib/kubelet/pods/8771e447-1cf7-43f9-bfab-6c1afd7476dc/volumes" Jan 29 16:30:54 crc kubenswrapper[4714]: I0129 16:30:54.196587 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="949d7185-7f54-44dd-9da9-3ed2c3c80e31" path="/var/lib/kubelet/pods/949d7185-7f54-44dd-9da9-3ed2c3c80e31/volumes" Jan 29 16:30:54 crc kubenswrapper[4714]: I0129 16:30:54.197322 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d990dfb7-e078-4c7e-8e98-40b10f062a04" path="/var/lib/kubelet/pods/d990dfb7-e078-4c7e-8e98-40b10f062a04/volumes" Jan 29 16:30:54 crc kubenswrapper[4714]: I0129 16:30:54.785094 4714 scope.go:117] "RemoveContainer" containerID="a2afc6d59d0b69e82adfd2f0ef885392ef80db9eed2c4295483123972e972c1a" Jan 29 16:30:54 crc kubenswrapper[4714]: I0129 16:30:54.822526 4714 scope.go:117] "RemoveContainer" containerID="bfc5719dfa5a492d30c2a6be943eb0655e0af9b3224bd4745d65e5929dc3407a" Jan 29 16:30:54 crc kubenswrapper[4714]: I0129 16:30:54.856101 4714 scope.go:117] "RemoveContainer" containerID="6ee4a0f3d055cfa18b2c55afd177524902b4dc64f544a61af9b1f46505e17336" Jan 29 16:30:54 crc kubenswrapper[4714]: I0129 16:30:54.879881 4714 scope.go:117] "RemoveContainer" containerID="265cc03bd32fdd618d4ab75713d4df2f554d8e322232952f23e67ae7895f0208" Jan 29 16:30:57 crc kubenswrapper[4714]: I0129 16:30:57.844276 4714 patch_prober.go:28] interesting pod/machine-config-daemon-ppngk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:30:57 crc kubenswrapper[4714]: I0129 16:30:57.846502 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:30:57 crc kubenswrapper[4714]: I0129 16:30:57.846711 4714 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" Jan 29 16:30:57 crc kubenswrapper[4714]: I0129 16:30:57.847809 4714 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6d286411f160a5fdbd13efa6bbfae544ec01e44f19ea6b8ff05d4ab9953a5f4a"} 
pod="openshift-machine-config-operator/machine-config-daemon-ppngk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:30:57 crc kubenswrapper[4714]: I0129 16:30:57.848595 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" containerID="cri-o://6d286411f160a5fdbd13efa6bbfae544ec01e44f19ea6b8ff05d4ab9953a5f4a" gracePeriod=600 Jan 29 16:30:58 crc kubenswrapper[4714]: I0129 16:30:58.946217 4714 generic.go:334] "Generic (PLEG): container finished" podID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerID="6d286411f160a5fdbd13efa6bbfae544ec01e44f19ea6b8ff05d4ab9953a5f4a" exitCode=0 Jan 29 16:30:58 crc kubenswrapper[4714]: I0129 16:30:58.946289 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" event={"ID":"c8c765f3-89eb-4077-8829-03e86eb0c90c","Type":"ContainerDied","Data":"6d286411f160a5fdbd13efa6bbfae544ec01e44f19ea6b8ff05d4ab9953a5f4a"} Jan 29 16:30:58 crc kubenswrapper[4714]: I0129 16:30:58.946393 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" event={"ID":"c8c765f3-89eb-4077-8829-03e86eb0c90c","Type":"ContainerStarted","Data":"28ae6797628a288c954e7899195453697f32c3fca947d19910c2ccc63b246a5b"} Jan 29 16:30:58 crc kubenswrapper[4714]: I0129 16:30:58.946474 4714 scope.go:117] "RemoveContainer" containerID="77045db0ac9dbee23fe648e58207222e15e50d5178fcc5cc7a606b4bbe2af7ec" Jan 29 16:31:02 crc kubenswrapper[4714]: E0129 16:31:02.186918 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.802268 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qvwwf/must-gather-x95cf"] Jan 29 16:31:05 crc kubenswrapper[4714]: E0129 16:31:05.802673 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc06a535-6f60-438e-b52d-5dc90fae8c67" containerName="keystone-api" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.802683 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc06a535-6f60-438e-b52d-5dc90fae8c67" containerName="keystone-api" Jan 29 16:31:05 crc kubenswrapper[4714]: E0129 16:31:05.802698 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aa31790-7a3c-4a66-aace-c087c0221c6b" containerName="manager" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.802704 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aa31790-7a3c-4a66-aace-c087c0221c6b" containerName="manager" Jan 29 16:31:05 crc kubenswrapper[4714]: E0129 16:31:05.802717 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2779e724-225f-4a5f-9e2c-3b05fe08dff2" containerName="manager" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.802722 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="2779e724-225f-4a5f-9e2c-3b05fe08dff2" containerName="manager" Jan 29 16:31:05 crc kubenswrapper[4714]: E0129 16:31:05.802731 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d336f2-b190-4e32-be3a-27fbf0e50a06" 
containerName="galera" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.802738 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d336f2-b190-4e32-be3a-27fbf0e50a06" containerName="galera" Jan 29 16:31:05 crc kubenswrapper[4714]: E0129 16:31:05.802744 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea" containerName="memcached" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.802750 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea" containerName="memcached" Jan 29 16:31:05 crc kubenswrapper[4714]: E0129 16:31:05.802758 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d336f2-b190-4e32-be3a-27fbf0e50a06" containerName="mysql-bootstrap" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.802763 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d336f2-b190-4e32-be3a-27fbf0e50a06" containerName="mysql-bootstrap" Jan 29 16:31:05 crc kubenswrapper[4714]: E0129 16:31:05.802771 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55e23ac1-a89b-4689-a17d-bee875f7783e" containerName="rabbitmq" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.802776 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="55e23ac1-a89b-4689-a17d-bee875f7783e" containerName="rabbitmq" Jan 29 16:31:05 crc kubenswrapper[4714]: E0129 16:31:05.802785 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e367e739-45d9-4c71-82fa-ecda02da3277" containerName="galera" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.802790 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="e367e739-45d9-4c71-82fa-ecda02da3277" containerName="galera" Jan 29 16:31:05 crc kubenswrapper[4714]: E0129 16:31:05.802797 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27f02c1-a7d5-4d49-838b-df5445720a07" containerName="galera" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.802802 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27f02c1-a7d5-4d49-838b-df5445720a07" containerName="galera" Jan 29 16:31:05 crc kubenswrapper[4714]: E0129 16:31:05.802810 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27f02c1-a7d5-4d49-838b-df5445720a07" containerName="mysql-bootstrap" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.802816 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27f02c1-a7d5-4d49-838b-df5445720a07" containerName="mysql-bootstrap" Jan 29 16:31:05 crc kubenswrapper[4714]: E0129 16:31:05.802823 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d602ee5-4171-4dc7-9852-88c6019696e1" containerName="manager" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.802829 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d602ee5-4171-4dc7-9852-88c6019696e1" containerName="manager" Jan 29 16:31:05 crc kubenswrapper[4714]: E0129 16:31:05.802836 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87506df3-b56a-4598-8309-e865dc93cf53" containerName="registry-server" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.802841 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="87506df3-b56a-4598-8309-e865dc93cf53" containerName="registry-server" Jan 29 16:31:05 crc kubenswrapper[4714]: E0129 16:31:05.802848 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12c4aba1-9cb7-4f12-bbd5-fee3ccfa0c88" containerName="operator" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 
16:31:05.802853 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="12c4aba1-9cb7-4f12-bbd5-fee3ccfa0c88" containerName="operator" Jan 29 16:31:05 crc kubenswrapper[4714]: E0129 16:31:05.802860 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112a1cde-5990-4140-97dd-c2bbc4f73197" containerName="mariadb-account-delete" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.802868 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="112a1cde-5990-4140-97dd-c2bbc4f73197" containerName="mariadb-account-delete" Jan 29 16:31:05 crc kubenswrapper[4714]: E0129 16:31:05.802877 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55e23ac1-a89b-4689-a17d-bee875f7783e" containerName="setup-container" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.802882 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="55e23ac1-a89b-4689-a17d-bee875f7783e" containerName="setup-container" Jan 29 16:31:05 crc kubenswrapper[4714]: E0129 16:31:05.802891 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112a1cde-5990-4140-97dd-c2bbc4f73197" containerName="mariadb-account-delete" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.802898 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="112a1cde-5990-4140-97dd-c2bbc4f73197" containerName="mariadb-account-delete" Jan 29 16:31:05 crc kubenswrapper[4714]: E0129 16:31:05.802904 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d990dfb7-e078-4c7e-8e98-40b10f062a04" containerName="registry-server" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.802910 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="d990dfb7-e078-4c7e-8e98-40b10f062a04" containerName="registry-server" Jan 29 16:31:05 crc kubenswrapper[4714]: E0129 16:31:05.802918 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668764e7-6295-4275-bcc9-24b680ec685f" containerName="registry-server" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.802923 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="668764e7-6295-4275-bcc9-24b680ec685f" containerName="registry-server" Jan 29 16:31:05 crc kubenswrapper[4714]: E0129 16:31:05.802944 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e367e739-45d9-4c71-82fa-ecda02da3277" containerName="mysql-bootstrap" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.802949 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="e367e739-45d9-4c71-82fa-ecda02da3277" containerName="mysql-bootstrap" Jan 29 16:31:05 crc kubenswrapper[4714]: E0129 16:31:05.802957 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d50b97-e5e2-426e-b881-dfb2077c0838" containerName="registry-server" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.802962 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d50b97-e5e2-426e-b881-dfb2077c0838" containerName="registry-server" Jan 29 16:31:05 crc kubenswrapper[4714]: E0129 16:31:05.802970 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dcd8561-aa17-46a8-b184-0495c320a33b" containerName="registry-server" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.802975 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dcd8561-aa17-46a8-b184-0495c320a33b" containerName="registry-server" Jan 29 16:31:05 crc kubenswrapper[4714]: E0129 16:31:05.802984 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8771e447-1cf7-43f9-bfab-6c1afd7476dc" containerName="manager" Jan 29 16:31:05 
crc kubenswrapper[4714]: I0129 16:31:05.802989 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="8771e447-1cf7-43f9-bfab-6c1afd7476dc" containerName="manager" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.803072 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="112a1cde-5990-4140-97dd-c2bbc4f73197" containerName="mariadb-account-delete" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.803082 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8d336f2-b190-4e32-be3a-27fbf0e50a06" containerName="galera" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.803091 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="12c4aba1-9cb7-4f12-bbd5-fee3ccfa0c88" containerName="operator" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.803100 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="e27f02c1-a7d5-4d49-838b-df5445720a07" containerName="galera" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.803107 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="55e23ac1-a89b-4689-a17d-bee875f7783e" containerName="rabbitmq" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.803114 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="e367e739-45d9-4c71-82fa-ecda02da3277" containerName="galera" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.803122 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b0a6c4-5abe-4f28-9e81-56fcbf92f2ea" containerName="memcached" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.803129 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="668764e7-6295-4275-bcc9-24b680ec685f" containerName="registry-server" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.803136 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc06a535-6f60-438e-b52d-5dc90fae8c67" containerName="keystone-api" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.803146 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="87506df3-b56a-4598-8309-e865dc93cf53" containerName="registry-server" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.803151 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="8771e447-1cf7-43f9-bfab-6c1afd7476dc" containerName="manager" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.803159 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aa31790-7a3c-4a66-aace-c087c0221c6b" containerName="manager" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.803166 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d50b97-e5e2-426e-b881-dfb2077c0838" containerName="registry-server" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.803174 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dcd8561-aa17-46a8-b184-0495c320a33b" containerName="registry-server" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.803180 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="2779e724-225f-4a5f-9e2c-3b05fe08dff2" containerName="manager" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.803186 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d602ee5-4171-4dc7-9852-88c6019696e1" containerName="manager" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.803193 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="d990dfb7-e078-4c7e-8e98-40b10f062a04" containerName="registry-server" Jan 29 
16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.803362 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="112a1cde-5990-4140-97dd-c2bbc4f73197" containerName="mariadb-account-delete" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.803708 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qvwwf/must-gather-x95cf" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.807067 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qvwwf"/"default-dockercfg-9z4tm" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.807099 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qvwwf"/"kube-root-ca.crt" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.807466 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qvwwf"/"openshift-service-ca.crt" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.810328 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qvwwf/must-gather-x95cf"] Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.949716 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm9z8\" (UniqueName: \"kubernetes.io/projected/7bff1b3a-4d70-4c22-ab8c-406d7e147f74-kube-api-access-lm9z8\") pod \"must-gather-x95cf\" (UID: \"7bff1b3a-4d70-4c22-ab8c-406d7e147f74\") " pod="openshift-must-gather-qvwwf/must-gather-x95cf" Jan 29 16:31:05 crc kubenswrapper[4714]: I0129 16:31:05.949867 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7bff1b3a-4d70-4c22-ab8c-406d7e147f74-must-gather-output\") pod \"must-gather-x95cf\" (UID: \"7bff1b3a-4d70-4c22-ab8c-406d7e147f74\") " pod="openshift-must-gather-qvwwf/must-gather-x95cf" Jan 29 16:31:06 crc kubenswrapper[4714]: I0129 16:31:06.051220 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7bff1b3a-4d70-4c22-ab8c-406d7e147f74-must-gather-output\") pod \"must-gather-x95cf\" (UID: \"7bff1b3a-4d70-4c22-ab8c-406d7e147f74\") " pod="openshift-must-gather-qvwwf/must-gather-x95cf" Jan 29 16:31:06 crc kubenswrapper[4714]: I0129 16:31:06.051264 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm9z8\" (UniqueName: \"kubernetes.io/projected/7bff1b3a-4d70-4c22-ab8c-406d7e147f74-kube-api-access-lm9z8\") pod \"must-gather-x95cf\" (UID: \"7bff1b3a-4d70-4c22-ab8c-406d7e147f74\") " pod="openshift-must-gather-qvwwf/must-gather-x95cf" Jan 29 16:31:06 crc kubenswrapper[4714]: I0129 16:31:06.051687 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7bff1b3a-4d70-4c22-ab8c-406d7e147f74-must-gather-output\") pod \"must-gather-x95cf\" (UID: \"7bff1b3a-4d70-4c22-ab8c-406d7e147f74\") " pod="openshift-must-gather-qvwwf/must-gather-x95cf" Jan 29 16:31:06 crc kubenswrapper[4714]: I0129 16:31:06.068767 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm9z8\" (UniqueName: \"kubernetes.io/projected/7bff1b3a-4d70-4c22-ab8c-406d7e147f74-kube-api-access-lm9z8\") pod \"must-gather-x95cf\" (UID: \"7bff1b3a-4d70-4c22-ab8c-406d7e147f74\") " pod="openshift-must-gather-qvwwf/must-gather-x95cf" Jan 29 16:31:06 crc 
kubenswrapper[4714]: I0129 16:31:06.118138 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qvwwf/must-gather-x95cf" Jan 29 16:31:06 crc kubenswrapper[4714]: I0129 16:31:06.503193 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qvwwf/must-gather-x95cf"] Jan 29 16:31:07 crc kubenswrapper[4714]: I0129 16:31:07.007073 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qvwwf/must-gather-x95cf" event={"ID":"7bff1b3a-4d70-4c22-ab8c-406d7e147f74","Type":"ContainerStarted","Data":"507998147d3c1a984a1172e0e4c5307332cf30f147bf5ae992a56dd0e9660e72"} Jan 29 16:31:13 crc kubenswrapper[4714]: I0129 16:31:13.062293 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qvwwf/must-gather-x95cf" event={"ID":"7bff1b3a-4d70-4c22-ab8c-406d7e147f74","Type":"ContainerStarted","Data":"f6d6162656c64056efbbd383a714625735662747cd4e794e964ea668dbf260b4"} Jan 29 16:31:13 crc kubenswrapper[4714]: I0129 16:31:13.062899 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qvwwf/must-gather-x95cf" event={"ID":"7bff1b3a-4d70-4c22-ab8c-406d7e147f74","Type":"ContainerStarted","Data":"2547fd88c31e3900e045cc8b73057acb07ad9efeb1d1b5a29b5a911c036ec93d"} Jan 29 16:31:13 crc kubenswrapper[4714]: I0129 16:31:13.087622 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qvwwf/must-gather-x95cf" podStartSLOduration=2.130600968 podStartE2EDuration="8.087603487s" podCreationTimestamp="2026-01-29 16:31:05 +0000 UTC" firstStartedPulling="2026-01-29 16:31:06.516701315 +0000 UTC m=+1273.037202485" lastFinishedPulling="2026-01-29 16:31:12.473703884 +0000 UTC m=+1278.994205004" observedRunningTime="2026-01-29 16:31:13.085108265 +0000 UTC m=+1279.605609385" watchObservedRunningTime="2026-01-29 16:31:13.087603487 +0000 UTC m=+1279.608104607" Jan 29 16:31:13 crc kubenswrapper[4714]: E0129 16:31:13.184884 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:31:25 crc kubenswrapper[4714]: E0129 16:31:25.186459 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:31:37 crc kubenswrapper[4714]: E0129 16:31:37.346802 4714 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:31:37 crc kubenswrapper[4714]: E0129 16:31:37.347581 4714 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmm88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wdwq5_openshift-marketplace(8c12ad14-f878-42a1-a168-bad4026ec2dd): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:31:37 crc kubenswrapper[4714]: E0129 16:31:37.348775 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:31:50 crc kubenswrapper[4714]: E0129 16:31:50.185985 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:31:55 crc kubenswrapper[4714]: I0129 16:31:55.314801 4714 scope.go:117] "RemoveContainer" containerID="bf336482d3003324aec4b339b20443d78b6e477a15e7d92b43bd42b82e826811" Jan 29 16:31:55 crc kubenswrapper[4714]: I0129 16:31:55.353170 4714 scope.go:117] "RemoveContainer" containerID="1cd564e322fa134ca0b1e14f7b1c05c0de31a1e7c8f443cd6b64bbf340b9a6ae" Jan 29 16:31:55 crc kubenswrapper[4714]: I0129 16:31:55.377331 4714 scope.go:117] "RemoveContainer" containerID="8be88780f2ccaa67529f7c97f45a315a79167378ebf7e1fdcde84818ec246373" Jan 29 16:31:55 crc kubenswrapper[4714]: I0129 16:31:55.398850 4714 scope.go:117] "RemoveContainer" containerID="08be91d5ade94d67396b725df7d3290e5e0b4eed8f678b830b5a24bf0aefb822" Jan 29 16:31:55 crc kubenswrapper[4714]: I0129 16:31:55.424807 4714 scope.go:117] "RemoveContainer" containerID="3bc2a2089f6be296701e5d57c79cbb2a9a2dd560e4db7f4ea6460bad3386ed41" Jan 29 16:31:55 crc kubenswrapper[4714]: I0129 16:31:55.437198 4714 scope.go:117] "RemoveContainer" containerID="9a1454161993efe4d3d18d4c054f92025122ffa043acc6e49d820a2c93adec47" Jan 29 
16:32:00 crc kubenswrapper[4714]: I0129 16:32:00.814956 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-sq9mx_8062d225-aa57-48df-bf28-2254ecc4f635/control-plane-machine-set-operator/0.log" Jan 29 16:32:00 crc kubenswrapper[4714]: I0129 16:32:00.963674 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-z4h55_bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92/kube-rbac-proxy/0.log" Jan 29 16:32:01 crc kubenswrapper[4714]: I0129 16:32:01.002172 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-z4h55_bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92/machine-api-operator/0.log" Jan 29 16:32:05 crc kubenswrapper[4714]: E0129 16:32:05.185653 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:32:19 crc kubenswrapper[4714]: E0129 16:32:19.185890 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:32:28 crc kubenswrapper[4714]: I0129 16:32:28.127634 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-m26zh_78b34628-144f-416a-b493-15ba445caa48/controller/0.log" Jan 29 16:32:28 crc kubenswrapper[4714]: I0129 16:32:28.133581 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-m26zh_78b34628-144f-416a-b493-15ba445caa48/kube-rbac-proxy/0.log" Jan 29 16:32:28 crc kubenswrapper[4714]: I0129 16:32:28.374408 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/cp-frr-files/0.log" Jan 29 16:32:28 crc kubenswrapper[4714]: I0129 16:32:28.487912 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/cp-frr-files/0.log" Jan 29 16:32:28 crc kubenswrapper[4714]: I0129 16:32:28.519802 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/cp-reloader/0.log" Jan 29 16:32:28 crc kubenswrapper[4714]: I0129 16:32:28.569503 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/cp-metrics/0.log" Jan 29 16:32:28 crc kubenswrapper[4714]: I0129 16:32:28.585684 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/cp-reloader/0.log" Jan 29 16:32:28 crc kubenswrapper[4714]: I0129 16:32:28.740717 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/cp-reloader/0.log" Jan 29 16:32:28 crc kubenswrapper[4714]: I0129 16:32:28.747540 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/cp-frr-files/0.log" Jan 29 16:32:28 crc kubenswrapper[4714]: I0129 
16:32:28.782249 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/cp-metrics/0.log" Jan 29 16:32:28 crc kubenswrapper[4714]: I0129 16:32:28.784365 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/cp-metrics/0.log" Jan 29 16:32:28 crc kubenswrapper[4714]: I0129 16:32:28.950366 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/cp-metrics/0.log" Jan 29 16:32:28 crc kubenswrapper[4714]: I0129 16:32:28.952061 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/cp-reloader/0.log" Jan 29 16:32:28 crc kubenswrapper[4714]: I0129 16:32:28.963845 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/cp-frr-files/0.log" Jan 29 16:32:28 crc kubenswrapper[4714]: I0129 16:32:28.991200 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/controller/0.log" Jan 29 16:32:29 crc kubenswrapper[4714]: I0129 16:32:29.102396 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/frr-metrics/0.log" Jan 29 16:32:29 crc kubenswrapper[4714]: I0129 16:32:29.149036 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/kube-rbac-proxy/0.log" Jan 29 16:32:29 crc kubenswrapper[4714]: I0129 16:32:29.184042 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/kube-rbac-proxy-frr/0.log" Jan 29 16:32:29 crc kubenswrapper[4714]: I0129 16:32:29.322733 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/reloader/0.log" Jan 29 16:32:29 crc kubenswrapper[4714]: I0129 16:32:29.391069 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-kk79r_9bbfcf92-8a27-4ba0-9017-7c36906791c8/frr-k8s-webhook-server/0.log" Jan 29 16:32:29 crc kubenswrapper[4714]: I0129 16:32:29.513901 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/frr/0.log" Jan 29 16:32:29 crc kubenswrapper[4714]: I0129 16:32:29.588502 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-586b87b897-zpr4q_432a4f98-877c-4f7a-b2b0-ce273a77450a/manager/0.log" Jan 29 16:32:29 crc kubenswrapper[4714]: I0129 16:32:29.834968 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7df7c8d444-xs67n_ffe179b8-a1c8-430b-94f5-920aacf0defe/webhook-server/0.log" Jan 29 16:32:29 crc kubenswrapper[4714]: I0129 16:32:29.874589 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7mmsh_813f735d-8336-49e9-b018-e6dbf74ddc99/kube-rbac-proxy/0.log" Jan 29 16:32:29 crc kubenswrapper[4714]: I0129 16:32:29.989308 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7mmsh_813f735d-8336-49e9-b018-e6dbf74ddc99/speaker/0.log" Jan 29 16:32:31 crc kubenswrapper[4714]: E0129 16:32:31.186229 4714 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:32:43 crc kubenswrapper[4714]: E0129 16:32:43.187799 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:32:55 crc kubenswrapper[4714]: I0129 16:32:55.485290 4714 scope.go:117] "RemoveContainer" containerID="e9b290e5fae1b9ebd91874f0c7f54baf70c50604b0924e9b333424187e1578aa" Jan 29 16:32:55 crc kubenswrapper[4714]: I0129 16:32:55.509809 4714 scope.go:117] "RemoveContainer" containerID="ae1e8fd69fe054dba679bc3d816ff2486311c7fd767f49bb3c77b8a2f9da9054" Jan 29 16:32:55 crc kubenswrapper[4714]: I0129 16:32:55.526246 4714 scope.go:117] "RemoveContainer" containerID="cdd9e5087dbe100b4200dc045cc6536b7bd5644b604c1ad48cd724f12116a2d5" Jan 29 16:32:55 crc kubenswrapper[4714]: I0129 16:32:55.575304 4714 scope.go:117] "RemoveContainer" containerID="a293414bfe6fecbd34f2097ce525abe6d50f764495aa3e4545c1f2cdb4d889ff" Jan 29 16:32:55 crc kubenswrapper[4714]: I0129 16:32:55.947143 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_c184c6f2-1af5-4f70-9251-6beb2baae06b/util/0.log" Jan 29 16:32:56 crc kubenswrapper[4714]: I0129 16:32:56.115962 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_c184c6f2-1af5-4f70-9251-6beb2baae06b/util/0.log" Jan 29 16:32:56 crc kubenswrapper[4714]: I0129 16:32:56.135360 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_c184c6f2-1af5-4f70-9251-6beb2baae06b/pull/0.log" Jan 29 16:32:56 crc kubenswrapper[4714]: I0129 16:32:56.139517 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_c184c6f2-1af5-4f70-9251-6beb2baae06b/pull/0.log" Jan 29 16:32:56 crc kubenswrapper[4714]: I0129 16:32:56.331893 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_c184c6f2-1af5-4f70-9251-6beb2baae06b/extract/0.log" Jan 29 16:32:56 crc kubenswrapper[4714]: I0129 16:32:56.351773 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_c184c6f2-1af5-4f70-9251-6beb2baae06b/util/0.log" Jan 29 16:32:56 crc kubenswrapper[4714]: I0129 16:32:56.360351 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_c184c6f2-1af5-4f70-9251-6beb2baae06b/pull/0.log" Jan 29 16:32:56 crc kubenswrapper[4714]: I0129 16:32:56.526353 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-slsxz_16cb244c-6c63-47e6-a312-ba33ab4d4899/extract-utilities/0.log" Jan 29 16:32:56 crc kubenswrapper[4714]: I0129 16:32:56.663542 4714 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-slsxz_16cb244c-6c63-47e6-a312-ba33ab4d4899/extract-content/0.log" Jan 29 16:32:56 crc kubenswrapper[4714]: I0129 16:32:56.714268 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-slsxz_16cb244c-6c63-47e6-a312-ba33ab4d4899/extract-utilities/0.log" Jan 29 16:32:56 crc kubenswrapper[4714]: I0129 16:32:56.726146 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-slsxz_16cb244c-6c63-47e6-a312-ba33ab4d4899/extract-content/0.log" Jan 29 16:32:56 crc kubenswrapper[4714]: I0129 16:32:56.881020 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-slsxz_16cb244c-6c63-47e6-a312-ba33ab4d4899/extract-utilities/0.log" Jan 29 16:32:56 crc kubenswrapper[4714]: I0129 16:32:56.915048 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-slsxz_16cb244c-6c63-47e6-a312-ba33ab4d4899/extract-content/0.log" Jan 29 16:32:57 crc kubenswrapper[4714]: I0129 16:32:57.224467 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-slsxz_16cb244c-6c63-47e6-a312-ba33ab4d4899/registry-server/0.log" Jan 29 16:32:57 crc kubenswrapper[4714]: I0129 16:32:57.322891 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ndx6p_ca655e22-8f97-4e9e-b115-734ae1af7d50/extract-utilities/0.log" Jan 29 16:32:57 crc kubenswrapper[4714]: I0129 16:32:57.481342 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ndx6p_ca655e22-8f97-4e9e-b115-734ae1af7d50/extract-content/0.log" Jan 29 16:32:57 crc kubenswrapper[4714]: I0129 16:32:57.504306 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ndx6p_ca655e22-8f97-4e9e-b115-734ae1af7d50/extract-content/0.log" Jan 29 16:32:57 crc kubenswrapper[4714]: I0129 16:32:57.524618 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ndx6p_ca655e22-8f97-4e9e-b115-734ae1af7d50/extract-utilities/0.log" Jan 29 16:32:57 crc kubenswrapper[4714]: I0129 16:32:57.665873 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ndx6p_ca655e22-8f97-4e9e-b115-734ae1af7d50/extract-utilities/0.log" Jan 29 16:32:57 crc kubenswrapper[4714]: I0129 16:32:57.690782 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ndx6p_ca655e22-8f97-4e9e-b115-734ae1af7d50/extract-content/0.log" Jan 29 16:32:57 crc kubenswrapper[4714]: I0129 16:32:57.888133 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7rvrl_2696757f-83ca-42df-9855-f76adeee02bb/marketplace-operator/0.log" Jan 29 16:32:58 crc kubenswrapper[4714]: I0129 16:32:58.005365 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6gkpz_04dba3a0-a89b-48c5-97ef-e5660d1ae7bb/extract-utilities/0.log" Jan 29 16:32:58 crc kubenswrapper[4714]: I0129 16:32:58.013305 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ndx6p_ca655e22-8f97-4e9e-b115-734ae1af7d50/registry-server/0.log" Jan 29 16:32:58 crc kubenswrapper[4714]: I0129 16:32:58.106072 4714 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6gkpz_04dba3a0-a89b-48c5-97ef-e5660d1ae7bb/extract-content/0.log" Jan 29 16:32:58 crc kubenswrapper[4714]: I0129 16:32:58.139593 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6gkpz_04dba3a0-a89b-48c5-97ef-e5660d1ae7bb/extract-utilities/0.log" Jan 29 16:32:58 crc kubenswrapper[4714]: I0129 16:32:58.168486 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6gkpz_04dba3a0-a89b-48c5-97ef-e5660d1ae7bb/extract-content/0.log" Jan 29 16:32:58 crc kubenswrapper[4714]: E0129 16:32:58.185509 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:32:58 crc kubenswrapper[4714]: I0129 16:32:58.316617 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6gkpz_04dba3a0-a89b-48c5-97ef-e5660d1ae7bb/extract-utilities/0.log" Jan 29 16:32:58 crc kubenswrapper[4714]: I0129 16:32:58.383124 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6gkpz_04dba3a0-a89b-48c5-97ef-e5660d1ae7bb/extract-content/0.log" Jan 29 16:32:58 crc kubenswrapper[4714]: I0129 16:32:58.436783 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6gkpz_04dba3a0-a89b-48c5-97ef-e5660d1ae7bb/registry-server/0.log" Jan 29 16:32:58 crc kubenswrapper[4714]: I0129 16:32:58.484078 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wdwq5_8c12ad14-f878-42a1-a168-bad4026ec2dd/extract-utilities/0.log" Jan 29 16:32:58 crc kubenswrapper[4714]: I0129 16:32:58.624134 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wdwq5_8c12ad14-f878-42a1-a168-bad4026ec2dd/extract-utilities/0.log" Jan 29 16:32:58 crc kubenswrapper[4714]: I0129 16:32:58.860351 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wdwq5_8c12ad14-f878-42a1-a168-bad4026ec2dd/extract-utilities/0.log" Jan 29 16:32:58 crc kubenswrapper[4714]: I0129 16:32:58.981109 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-knxc8_de6c9fbd-8657-4434-bff5-468276791466/extract-utilities/0.log" Jan 29 16:32:59 crc kubenswrapper[4714]: I0129 16:32:59.209178 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-knxc8_de6c9fbd-8657-4434-bff5-468276791466/extract-content/0.log" Jan 29 16:32:59 crc kubenswrapper[4714]: I0129 16:32:59.273106 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-knxc8_de6c9fbd-8657-4434-bff5-468276791466/extract-utilities/0.log" Jan 29 16:32:59 crc kubenswrapper[4714]: I0129 16:32:59.274153 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-knxc8_de6c9fbd-8657-4434-bff5-468276791466/extract-content/0.log" Jan 29 16:32:59 crc kubenswrapper[4714]: I0129 16:32:59.489573 4714 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-knxc8_de6c9fbd-8657-4434-bff5-468276791466/extract-utilities/0.log" Jan 29 16:32:59 crc kubenswrapper[4714]: I0129 16:32:59.507868 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-knxc8_de6c9fbd-8657-4434-bff5-468276791466/extract-content/0.log" Jan 29 16:32:59 crc kubenswrapper[4714]: I0129 16:32:59.666473 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-knxc8_de6c9fbd-8657-4434-bff5-468276791466/registry-server/0.log" Jan 29 16:33:13 crc kubenswrapper[4714]: E0129 16:33:13.187387 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:33:27 crc kubenswrapper[4714]: E0129 16:33:27.186528 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:33:27 crc kubenswrapper[4714]: I0129 16:33:27.844241 4714 patch_prober.go:28] interesting pod/machine-config-daemon-ppngk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:33:27 crc kubenswrapper[4714]: I0129 16:33:27.844322 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:33:41 crc kubenswrapper[4714]: E0129 16:33:41.187982 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:33:52 crc kubenswrapper[4714]: E0129 16:33:52.186843 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:33:55 crc kubenswrapper[4714]: I0129 16:33:55.648525 4714 scope.go:117] "RemoveContainer" containerID="a675f4f90dd437578a32683cafb8b1908c7b80f63189aec46569a29c2add56c0" Jan 29 16:33:55 crc kubenswrapper[4714]: I0129 16:33:55.680598 4714 scope.go:117] "RemoveContainer" containerID="02c31eb5896b8dc80e78bcd830c0e2f150e33491e2a451310581ca2dc793d036" Jan 29 16:33:55 crc kubenswrapper[4714]: I0129 16:33:55.719380 4714 scope.go:117] "RemoveContainer" containerID="57df90298e52fbd874f183f4f349569b890f49a89aba1583825eacf13909d613" Jan 29 16:33:55 crc kubenswrapper[4714]: I0129 16:33:55.758613 4714 scope.go:117] "RemoveContainer" 
containerID="6cf56f6dac5db6cefc7926b5a24bd8c2963224c5d6e15dd78662ec20f0cf0141" Jan 29 16:33:55 crc kubenswrapper[4714]: I0129 16:33:55.828027 4714 scope.go:117] "RemoveContainer" containerID="6f5e7b1c14376dcb0e62786390737b48cb04d2a05079d8160df0f9dedc56dfc8" Jan 29 16:33:55 crc kubenswrapper[4714]: I0129 16:33:55.846612 4714 scope.go:117] "RemoveContainer" containerID="8f755ca88cec23079c8fcc603a70054716d3829ccb1c6da9ff0f5feff88b5796" Jan 29 16:33:55 crc kubenswrapper[4714]: I0129 16:33:55.873825 4714 scope.go:117] "RemoveContainer" containerID="958fc7292af56ecfa7d5c5a7066233d1295c2b0c82a6e8e4646901914aabf005" Jan 29 16:33:57 crc kubenswrapper[4714]: I0129 16:33:57.844857 4714 patch_prober.go:28] interesting pod/machine-config-daemon-ppngk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:33:57 crc kubenswrapper[4714]: I0129 16:33:57.844987 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:34:06 crc kubenswrapper[4714]: E0129 16:34:06.191537 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:34:11 crc kubenswrapper[4714]: I0129 16:34:11.321241 4714 generic.go:334] "Generic (PLEG): container finished" podID="7bff1b3a-4d70-4c22-ab8c-406d7e147f74" containerID="2547fd88c31e3900e045cc8b73057acb07ad9efeb1d1b5a29b5a911c036ec93d" exitCode=0 Jan 29 16:34:11 crc kubenswrapper[4714]: I0129 16:34:11.321361 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qvwwf/must-gather-x95cf" event={"ID":"7bff1b3a-4d70-4c22-ab8c-406d7e147f74","Type":"ContainerDied","Data":"2547fd88c31e3900e045cc8b73057acb07ad9efeb1d1b5a29b5a911c036ec93d"} Jan 29 16:34:11 crc kubenswrapper[4714]: I0129 16:34:11.322191 4714 scope.go:117] "RemoveContainer" containerID="2547fd88c31e3900e045cc8b73057acb07ad9efeb1d1b5a29b5a911c036ec93d" Jan 29 16:34:11 crc kubenswrapper[4714]: I0129 16:34:11.435579 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qvwwf_must-gather-x95cf_7bff1b3a-4d70-4c22-ab8c-406d7e147f74/gather/0.log" Jan 29 16:34:18 crc kubenswrapper[4714]: I0129 16:34:18.171613 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qvwwf/must-gather-x95cf"] Jan 29 16:34:18 crc kubenswrapper[4714]: I0129 16:34:18.172293 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-qvwwf/must-gather-x95cf" podUID="7bff1b3a-4d70-4c22-ab8c-406d7e147f74" containerName="copy" containerID="cri-o://f6d6162656c64056efbbd383a714625735662747cd4e794e964ea668dbf260b4" gracePeriod=2 Jan 29 16:34:18 crc kubenswrapper[4714]: I0129 16:34:18.180887 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qvwwf/must-gather-x95cf"] Jan 29 16:34:18 crc kubenswrapper[4714]: I0129 16:34:18.396228 4714 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-qvwwf_must-gather-x95cf_7bff1b3a-4d70-4c22-ab8c-406d7e147f74/copy/0.log" Jan 29 16:34:18 crc kubenswrapper[4714]: I0129 16:34:18.396579 4714 generic.go:334] "Generic (PLEG): container finished" podID="7bff1b3a-4d70-4c22-ab8c-406d7e147f74" containerID="f6d6162656c64056efbbd383a714625735662747cd4e794e964ea668dbf260b4" exitCode=143 Jan 29 16:34:18 crc kubenswrapper[4714]: I0129 16:34:18.538330 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qvwwf_must-gather-x95cf_7bff1b3a-4d70-4c22-ab8c-406d7e147f74/copy/0.log" Jan 29 16:34:18 crc kubenswrapper[4714]: I0129 16:34:18.543159 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qvwwf/must-gather-x95cf" Jan 29 16:34:18 crc kubenswrapper[4714]: I0129 16:34:18.595557 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm9z8\" (UniqueName: \"kubernetes.io/projected/7bff1b3a-4d70-4c22-ab8c-406d7e147f74-kube-api-access-lm9z8\") pod \"7bff1b3a-4d70-4c22-ab8c-406d7e147f74\" (UID: \"7bff1b3a-4d70-4c22-ab8c-406d7e147f74\") " Jan 29 16:34:18 crc kubenswrapper[4714]: I0129 16:34:18.595708 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7bff1b3a-4d70-4c22-ab8c-406d7e147f74-must-gather-output\") pod \"7bff1b3a-4d70-4c22-ab8c-406d7e147f74\" (UID: \"7bff1b3a-4d70-4c22-ab8c-406d7e147f74\") " Jan 29 16:34:18 crc kubenswrapper[4714]: I0129 16:34:18.605702 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bff1b3a-4d70-4c22-ab8c-406d7e147f74-kube-api-access-lm9z8" (OuterVolumeSpecName: "kube-api-access-lm9z8") pod "7bff1b3a-4d70-4c22-ab8c-406d7e147f74" (UID: "7bff1b3a-4d70-4c22-ab8c-406d7e147f74"). InnerVolumeSpecName "kube-api-access-lm9z8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:18 crc kubenswrapper[4714]: I0129 16:34:18.653318 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bff1b3a-4d70-4c22-ab8c-406d7e147f74-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "7bff1b3a-4d70-4c22-ab8c-406d7e147f74" (UID: "7bff1b3a-4d70-4c22-ab8c-406d7e147f74"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:34:18 crc kubenswrapper[4714]: I0129 16:34:18.697459 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm9z8\" (UniqueName: \"kubernetes.io/projected/7bff1b3a-4d70-4c22-ab8c-406d7e147f74-kube-api-access-lm9z8\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:18 crc kubenswrapper[4714]: I0129 16:34:18.697552 4714 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7bff1b3a-4d70-4c22-ab8c-406d7e147f74-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:19 crc kubenswrapper[4714]: I0129 16:34:19.407354 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qvwwf_must-gather-x95cf_7bff1b3a-4d70-4c22-ab8c-406d7e147f74/copy/0.log" Jan 29 16:34:19 crc kubenswrapper[4714]: I0129 16:34:19.408544 4714 scope.go:117] "RemoveContainer" containerID="f6d6162656c64056efbbd383a714625735662747cd4e794e964ea668dbf260b4" Jan 29 16:34:19 crc kubenswrapper[4714]: I0129 16:34:19.408724 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qvwwf/must-gather-x95cf" Jan 29 16:34:19 crc kubenswrapper[4714]: I0129 16:34:19.435396 4714 scope.go:117] "RemoveContainer" containerID="2547fd88c31e3900e045cc8b73057acb07ad9efeb1d1b5a29b5a911c036ec93d" Jan 29 16:34:20 crc kubenswrapper[4714]: I0129 16:34:20.189835 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bff1b3a-4d70-4c22-ab8c-406d7e147f74" path="/var/lib/kubelet/pods/7bff1b3a-4d70-4c22-ab8c-406d7e147f74/volumes" Jan 29 16:34:21 crc kubenswrapper[4714]: E0129 16:34:21.186683 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:34:27 crc kubenswrapper[4714]: I0129 16:34:27.844851 4714 patch_prober.go:28] interesting pod/machine-config-daemon-ppngk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:34:27 crc kubenswrapper[4714]: I0129 16:34:27.845280 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:34:27 crc kubenswrapper[4714]: I0129 16:34:27.845340 4714 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" Jan 29 16:34:27 crc kubenswrapper[4714]: I0129 16:34:27.846170 4714 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"28ae6797628a288c954e7899195453697f32c3fca947d19910c2ccc63b246a5b"} pod="openshift-machine-config-operator/machine-config-daemon-ppngk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:34:27 crc kubenswrapper[4714]: I0129 16:34:27.846267 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" containerID="cri-o://28ae6797628a288c954e7899195453697f32c3fca947d19910c2ccc63b246a5b" gracePeriod=600 Jan 29 16:34:28 crc kubenswrapper[4714]: I0129 16:34:28.472035 4714 generic.go:334] "Generic (PLEG): container finished" podID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerID="28ae6797628a288c954e7899195453697f32c3fca947d19910c2ccc63b246a5b" exitCode=0 Jan 29 16:34:28 crc kubenswrapper[4714]: I0129 16:34:28.472091 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" event={"ID":"c8c765f3-89eb-4077-8829-03e86eb0c90c","Type":"ContainerDied","Data":"28ae6797628a288c954e7899195453697f32c3fca947d19910c2ccc63b246a5b"} Jan 29 16:34:28 crc kubenswrapper[4714]: I0129 16:34:28.472957 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" 
event={"ID":"c8c765f3-89eb-4077-8829-03e86eb0c90c","Type":"ContainerStarted","Data":"0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1"} Jan 29 16:34:28 crc kubenswrapper[4714]: I0129 16:34:28.473009 4714 scope.go:117] "RemoveContainer" containerID="6d286411f160a5fdbd13efa6bbfae544ec01e44f19ea6b8ff05d4ab9953a5f4a" Jan 29 16:34:33 crc kubenswrapper[4714]: E0129 16:34:33.187303 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:34:46 crc kubenswrapper[4714]: E0129 16:34:46.188270 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:34:55 crc kubenswrapper[4714]: I0129 16:34:55.957727 4714 scope.go:117] "RemoveContainer" containerID="cd8982aadd49edb0050578e4754c053be8b4e593ef390dafa6884ec2cec1fd5d" Jan 29 16:34:55 crc kubenswrapper[4714]: I0129 16:34:55.987271 4714 scope.go:117] "RemoveContainer" containerID="bc9178e686ab88b7e47825dd5faad2c6f1b972c479a40bdd9847878e376e9b8c" Jan 29 16:34:56 crc kubenswrapper[4714]: I0129 16:34:56.025344 4714 scope.go:117] "RemoveContainer" containerID="83eeaf58ca15604fd125219fc4be09b86cbb0308fb89b0438c4ada31625917b5" Jan 29 16:34:58 crc kubenswrapper[4714]: E0129 16:34:58.185861 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:35:09 crc kubenswrapper[4714]: E0129 16:35:09.188200 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:35:24 crc kubenswrapper[4714]: E0129 16:35:24.189908 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:35:37 crc kubenswrapper[4714]: E0129 16:35:37.187318 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:35:49 crc kubenswrapper[4714]: E0129 16:35:49.188756 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:35:56 crc kubenswrapper[4714]: I0129 16:35:56.125002 4714 scope.go:117] "RemoveContainer" containerID="06b2f42073c400aedcdcd2bb5ec2c469776d6fe3bfbe8ce136fb5ad93196a673" Jan 29 16:35:56 crc kubenswrapper[4714]: I0129 16:35:56.160711 4714 scope.go:117] "RemoveContainer" containerID="bbd0c612c943b6ec94ed1610e6f355545ecaf388e165bbb67cb2c301b434e969" Jan 29 16:36:02 crc kubenswrapper[4714]: E0129 16:36:02.186675 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:36:14 crc kubenswrapper[4714]: E0129 16:36:14.190396 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:36:28 crc kubenswrapper[4714]: E0129 16:36:28.187350 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" Jan 29 16:36:41 crc kubenswrapper[4714]: I0129 16:36:41.551579 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hqc4j/must-gather-684p8"] Jan 29 16:36:41 crc kubenswrapper[4714]: E0129 16:36:41.552865 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bff1b3a-4d70-4c22-ab8c-406d7e147f74" containerName="gather" Jan 29 16:36:41 crc kubenswrapper[4714]: I0129 16:36:41.553121 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bff1b3a-4d70-4c22-ab8c-406d7e147f74" containerName="gather" Jan 29 16:36:41 crc kubenswrapper[4714]: E0129 16:36:41.553159 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bff1b3a-4d70-4c22-ab8c-406d7e147f74" containerName="copy" Jan 29 16:36:41 crc kubenswrapper[4714]: I0129 16:36:41.553177 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bff1b3a-4d70-4c22-ab8c-406d7e147f74" containerName="copy" Jan 29 16:36:41 crc kubenswrapper[4714]: I0129 16:36:41.553429 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bff1b3a-4d70-4c22-ab8c-406d7e147f74" containerName="copy" Jan 29 16:36:41 crc kubenswrapper[4714]: I0129 16:36:41.553470 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bff1b3a-4d70-4c22-ab8c-406d7e147f74" containerName="gather" Jan 29 16:36:41 crc kubenswrapper[4714]: I0129 16:36:41.554775 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hqc4j/must-gather-684p8" Jan 29 16:36:41 crc kubenswrapper[4714]: I0129 16:36:41.557511 4714 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-hqc4j"/"default-dockercfg-jhg4d" Jan 29 16:36:41 crc kubenswrapper[4714]: I0129 16:36:41.558119 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hqc4j"/"kube-root-ca.crt" Jan 29 16:36:41 crc kubenswrapper[4714]: I0129 16:36:41.558158 4714 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hqc4j"/"openshift-service-ca.crt" Jan 29 16:36:41 crc kubenswrapper[4714]: I0129 16:36:41.561835 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hqc4j/must-gather-684p8"] Jan 29 16:36:41 crc kubenswrapper[4714]: I0129 16:36:41.708355 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jf26\" (UniqueName: \"kubernetes.io/projected/ff3dd7de-0ec3-4550-ba38-0d67405a2671-kube-api-access-8jf26\") pod \"must-gather-684p8\" (UID: \"ff3dd7de-0ec3-4550-ba38-0d67405a2671\") " pod="openshift-must-gather-hqc4j/must-gather-684p8" Jan 29 16:36:41 crc kubenswrapper[4714]: I0129 16:36:41.708539 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ff3dd7de-0ec3-4550-ba38-0d67405a2671-must-gather-output\") pod \"must-gather-684p8\" (UID: \"ff3dd7de-0ec3-4550-ba38-0d67405a2671\") " pod="openshift-must-gather-hqc4j/must-gather-684p8" Jan 29 16:36:41 crc kubenswrapper[4714]: I0129 16:36:41.809771 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jf26\" (UniqueName: \"kubernetes.io/projected/ff3dd7de-0ec3-4550-ba38-0d67405a2671-kube-api-access-8jf26\") pod \"must-gather-684p8\" (UID: \"ff3dd7de-0ec3-4550-ba38-0d67405a2671\") " pod="openshift-must-gather-hqc4j/must-gather-684p8" Jan 29 16:36:41 crc kubenswrapper[4714]: I0129 16:36:41.809863 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ff3dd7de-0ec3-4550-ba38-0d67405a2671-must-gather-output\") pod \"must-gather-684p8\" (UID: \"ff3dd7de-0ec3-4550-ba38-0d67405a2671\") " pod="openshift-must-gather-hqc4j/must-gather-684p8" Jan 29 16:36:41 crc kubenswrapper[4714]: I0129 16:36:41.810556 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ff3dd7de-0ec3-4550-ba38-0d67405a2671-must-gather-output\") pod \"must-gather-684p8\" (UID: \"ff3dd7de-0ec3-4550-ba38-0d67405a2671\") " pod="openshift-must-gather-hqc4j/must-gather-684p8" Jan 29 16:36:41 crc kubenswrapper[4714]: I0129 16:36:41.828323 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jf26\" (UniqueName: \"kubernetes.io/projected/ff3dd7de-0ec3-4550-ba38-0d67405a2671-kube-api-access-8jf26\") pod \"must-gather-684p8\" (UID: \"ff3dd7de-0ec3-4550-ba38-0d67405a2671\") " pod="openshift-must-gather-hqc4j/must-gather-684p8" Jan 29 16:36:41 crc kubenswrapper[4714]: I0129 16:36:41.876395 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hqc4j/must-gather-684p8" Jan 29 16:36:42 crc kubenswrapper[4714]: I0129 16:36:42.137904 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hqc4j/must-gather-684p8"] Jan 29 16:36:42 crc kubenswrapper[4714]: I0129 16:36:42.476272 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hqc4j/must-gather-684p8" event={"ID":"ff3dd7de-0ec3-4550-ba38-0d67405a2671","Type":"ContainerStarted","Data":"9fd93823c7fddefb604b23f3bb0a407aa212a848b3dc9ccb4d9664567595015f"} Jan 29 16:36:42 crc kubenswrapper[4714]: I0129 16:36:42.476324 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hqc4j/must-gather-684p8" event={"ID":"ff3dd7de-0ec3-4550-ba38-0d67405a2671","Type":"ContainerStarted","Data":"2a5fcee3c64fc545896c1c1e2d77c4828add1a5979b9e718e64ba8cc08bf61fa"} Jan 29 16:36:43 crc kubenswrapper[4714]: I0129 16:36:43.186874 4714 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:36:43 crc kubenswrapper[4714]: I0129 16:36:43.486036 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hqc4j/must-gather-684p8" event={"ID":"ff3dd7de-0ec3-4550-ba38-0d67405a2671","Type":"ContainerStarted","Data":"3660194f8ee015e6e72c5a402b3814eb8d75a532bbd31644dbe43ccd17971ac5"} Jan 29 16:36:43 crc kubenswrapper[4714]: I0129 16:36:43.511122 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hqc4j/must-gather-684p8" podStartSLOduration=2.511101412 podStartE2EDuration="2.511101412s" podCreationTimestamp="2026-01-29 16:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:36:43.510654589 +0000 UTC m=+1610.031155719" watchObservedRunningTime="2026-01-29 16:36:43.511101412 +0000 UTC m=+1610.031602552" Jan 29 16:36:44 crc kubenswrapper[4714]: I0129 16:36:44.494536 4714 generic.go:334] "Generic (PLEG): container finished" podID="8c12ad14-f878-42a1-a168-bad4026ec2dd" containerID="44fbc675db8c185d9a6165ba1fa89d5622594c14059d2b38df5556992739743d" exitCode=0 Jan 29 16:36:44 crc kubenswrapper[4714]: I0129 16:36:44.494591 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wdwq5" event={"ID":"8c12ad14-f878-42a1-a168-bad4026ec2dd","Type":"ContainerDied","Data":"44fbc675db8c185d9a6165ba1fa89d5622594c14059d2b38df5556992739743d"} Jan 29 16:36:45 crc kubenswrapper[4714]: I0129 16:36:45.500697 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wdwq5" event={"ID":"8c12ad14-f878-42a1-a168-bad4026ec2dd","Type":"ContainerStarted","Data":"0acf2b33ba53c938c17f8a546ede46efa1c819b32b82b89cd8a55e99bcbea2e7"} Jan 29 16:36:45 crc kubenswrapper[4714]: I0129 16:36:45.533459 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wdwq5" podStartSLOduration=3.455751704 podStartE2EDuration="10m51.533438037s" podCreationTimestamp="2026-01-29 16:25:54 +0000 UTC" firstStartedPulling="2026-01-29 16:25:56.810455116 +0000 UTC m=+963.330956236" lastFinishedPulling="2026-01-29 16:36:44.888141439 +0000 UTC m=+1611.408642569" observedRunningTime="2026-01-29 16:36:45.529292289 +0000 UTC m=+1612.049793419" watchObservedRunningTime="2026-01-29 16:36:45.533438037 +0000 UTC m=+1612.053939157" Jan 29 16:36:55 crc kubenswrapper[4714]: I0129 16:36:55.274017 
4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wdwq5" Jan 29 16:36:55 crc kubenswrapper[4714]: I0129 16:36:55.274580 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wdwq5" Jan 29 16:36:55 crc kubenswrapper[4714]: I0129 16:36:55.320363 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wdwq5" Jan 29 16:36:55 crc kubenswrapper[4714]: I0129 16:36:55.632429 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wdwq5" Jan 29 16:36:55 crc kubenswrapper[4714]: I0129 16:36:55.683917 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wdwq5"] Jan 29 16:36:56 crc kubenswrapper[4714]: I0129 16:36:56.226342 4714 scope.go:117] "RemoveContainer" containerID="be7a968b80d5f3fb2bec436bd6006753f8b129fcc60e1b9be5f43a75c59f2e55" Jan 29 16:36:56 crc kubenswrapper[4714]: I0129 16:36:56.299820 4714 scope.go:117] "RemoveContainer" containerID="5aeecda1a40201485f4391ac8cf0a5c17c26ea2f9167f27b15c6c783da6f0f44" Jan 29 16:36:56 crc kubenswrapper[4714]: I0129 16:36:56.323227 4714 scope.go:117] "RemoveContainer" containerID="ad6e6492e17aa0045196d2d7816583e0511c88fa1ba9566c638560f377a604b8" Jan 29 16:36:56 crc kubenswrapper[4714]: I0129 16:36:56.343586 4714 scope.go:117] "RemoveContainer" containerID="b17d360d1529ce324f10ea0628f0cee292edf28a50e5d75cc7f2606e49a8da6e" Jan 29 16:36:56 crc kubenswrapper[4714]: I0129 16:36:56.367977 4714 scope.go:117] "RemoveContainer" containerID="b09b52cf99e966280d15ceb6a6529b45a9303070f260a1e02acc4b1cf0da02c3" Jan 29 16:36:57 crc kubenswrapper[4714]: I0129 16:36:57.575159 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wdwq5" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" containerName="registry-server" containerID="cri-o://0acf2b33ba53c938c17f8a546ede46efa1c819b32b82b89cd8a55e99bcbea2e7" gracePeriod=2 Jan 29 16:36:57 crc kubenswrapper[4714]: I0129 16:36:57.844186 4714 patch_prober.go:28] interesting pod/machine-config-daemon-ppngk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:36:57 crc kubenswrapper[4714]: I0129 16:36:57.844237 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.233007 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wdwq5" Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.336698 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c12ad14-f878-42a1-a168-bad4026ec2dd-utilities\") pod \"8c12ad14-f878-42a1-a168-bad4026ec2dd\" (UID: \"8c12ad14-f878-42a1-a168-bad4026ec2dd\") " Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.336762 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmm88\" (UniqueName: \"kubernetes.io/projected/8c12ad14-f878-42a1-a168-bad4026ec2dd-kube-api-access-tmm88\") pod \"8c12ad14-f878-42a1-a168-bad4026ec2dd\" (UID: \"8c12ad14-f878-42a1-a168-bad4026ec2dd\") " Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.336836 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c12ad14-f878-42a1-a168-bad4026ec2dd-catalog-content\") pod \"8c12ad14-f878-42a1-a168-bad4026ec2dd\" (UID: \"8c12ad14-f878-42a1-a168-bad4026ec2dd\") " Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.341983 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c12ad14-f878-42a1-a168-bad4026ec2dd-utilities" (OuterVolumeSpecName: "utilities") pod "8c12ad14-f878-42a1-a168-bad4026ec2dd" (UID: "8c12ad14-f878-42a1-a168-bad4026ec2dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.352130 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c12ad14-f878-42a1-a168-bad4026ec2dd-kube-api-access-tmm88" (OuterVolumeSpecName: "kube-api-access-tmm88") pod "8c12ad14-f878-42a1-a168-bad4026ec2dd" (UID: "8c12ad14-f878-42a1-a168-bad4026ec2dd"). InnerVolumeSpecName "kube-api-access-tmm88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.363779 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c12ad14-f878-42a1-a168-bad4026ec2dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c12ad14-f878-42a1-a168-bad4026ec2dd" (UID: "8c12ad14-f878-42a1-a168-bad4026ec2dd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.438611 4714 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c12ad14-f878-42a1-a168-bad4026ec2dd-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.438650 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmm88\" (UniqueName: \"kubernetes.io/projected/8c12ad14-f878-42a1-a168-bad4026ec2dd-kube-api-access-tmm88\") on node \"crc\" DevicePath \"\"" Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.438663 4714 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c12ad14-f878-42a1-a168-bad4026ec2dd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.592307 4714 generic.go:334] "Generic (PLEG): container finished" podID="8c12ad14-f878-42a1-a168-bad4026ec2dd" containerID="0acf2b33ba53c938c17f8a546ede46efa1c819b32b82b89cd8a55e99bcbea2e7" exitCode=0 Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.592352 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wdwq5" event={"ID":"8c12ad14-f878-42a1-a168-bad4026ec2dd","Type":"ContainerDied","Data":"0acf2b33ba53c938c17f8a546ede46efa1c819b32b82b89cd8a55e99bcbea2e7"} Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.592423 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wdwq5" event={"ID":"8c12ad14-f878-42a1-a168-bad4026ec2dd","Type":"ContainerDied","Data":"e9d23b1dec5222eaf00f0f2fac6279153030320fe12205a5c55c774a975165f4"} Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.592447 4714 scope.go:117] "RemoveContainer" containerID="0acf2b33ba53c938c17f8a546ede46efa1c819b32b82b89cd8a55e99bcbea2e7" Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.592459 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wdwq5" Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.623193 4714 scope.go:117] "RemoveContainer" containerID="44fbc675db8c185d9a6165ba1fa89d5622594c14059d2b38df5556992739743d" Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.634496 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wdwq5"] Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.644674 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wdwq5"] Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.645961 4714 scope.go:117] "RemoveContainer" containerID="efa3dccd677eb8ead685e31efa78ca9b7337f378a73a59452840851ca329f89c" Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.676904 4714 scope.go:117] "RemoveContainer" containerID="0acf2b33ba53c938c17f8a546ede46efa1c819b32b82b89cd8a55e99bcbea2e7" Jan 29 16:36:58 crc kubenswrapper[4714]: E0129 16:36:58.677400 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0acf2b33ba53c938c17f8a546ede46efa1c819b32b82b89cd8a55e99bcbea2e7\": container with ID starting with 0acf2b33ba53c938c17f8a546ede46efa1c819b32b82b89cd8a55e99bcbea2e7 not found: ID does not exist" containerID="0acf2b33ba53c938c17f8a546ede46efa1c819b32b82b89cd8a55e99bcbea2e7" Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.677439 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0acf2b33ba53c938c17f8a546ede46efa1c819b32b82b89cd8a55e99bcbea2e7"} err="failed to get container status \"0acf2b33ba53c938c17f8a546ede46efa1c819b32b82b89cd8a55e99bcbea2e7\": rpc error: code = NotFound desc = could not find container \"0acf2b33ba53c938c17f8a546ede46efa1c819b32b82b89cd8a55e99bcbea2e7\": container with ID starting with 0acf2b33ba53c938c17f8a546ede46efa1c819b32b82b89cd8a55e99bcbea2e7 not found: ID does not exist" Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.677466 4714 scope.go:117] "RemoveContainer" containerID="44fbc675db8c185d9a6165ba1fa89d5622594c14059d2b38df5556992739743d" Jan 29 16:36:58 crc kubenswrapper[4714]: E0129 16:36:58.677865 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44fbc675db8c185d9a6165ba1fa89d5622594c14059d2b38df5556992739743d\": container with ID starting with 44fbc675db8c185d9a6165ba1fa89d5622594c14059d2b38df5556992739743d not found: ID does not exist" containerID="44fbc675db8c185d9a6165ba1fa89d5622594c14059d2b38df5556992739743d" Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.677901 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44fbc675db8c185d9a6165ba1fa89d5622594c14059d2b38df5556992739743d"} err="failed to get container status \"44fbc675db8c185d9a6165ba1fa89d5622594c14059d2b38df5556992739743d\": rpc error: code = NotFound desc = could not find container \"44fbc675db8c185d9a6165ba1fa89d5622594c14059d2b38df5556992739743d\": container with ID starting with 44fbc675db8c185d9a6165ba1fa89d5622594c14059d2b38df5556992739743d not found: ID does not exist" Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.677921 4714 scope.go:117] "RemoveContainer" containerID="efa3dccd677eb8ead685e31efa78ca9b7337f378a73a59452840851ca329f89c" Jan 29 16:36:58 crc kubenswrapper[4714]: E0129 16:36:58.678207 4714 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"efa3dccd677eb8ead685e31efa78ca9b7337f378a73a59452840851ca329f89c\": container with ID starting with efa3dccd677eb8ead685e31efa78ca9b7337f378a73a59452840851ca329f89c not found: ID does not exist" containerID="efa3dccd677eb8ead685e31efa78ca9b7337f378a73a59452840851ca329f89c" Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.678234 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efa3dccd677eb8ead685e31efa78ca9b7337f378a73a59452840851ca329f89c"} err="failed to get container status \"efa3dccd677eb8ead685e31efa78ca9b7337f378a73a59452840851ca329f89c\": rpc error: code = NotFound desc = could not find container \"efa3dccd677eb8ead685e31efa78ca9b7337f378a73a59452840851ca329f89c\": container with ID starting with efa3dccd677eb8ead685e31efa78ca9b7337f378a73a59452840851ca329f89c not found: ID does not exist" Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.963479 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2lqb9"] Jan 29 16:36:58 crc kubenswrapper[4714]: E0129 16:36:58.963805 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" containerName="registry-server" Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.963833 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" containerName="registry-server" Jan 29 16:36:58 crc kubenswrapper[4714]: E0129 16:36:58.963858 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" containerName="extract-content" Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.963870 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" containerName="extract-content" Jan 29 16:36:58 crc kubenswrapper[4714]: E0129 16:36:58.963891 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" containerName="extract-utilities" Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.963905 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" containerName="extract-utilities" Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.967177 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" containerName="registry-server" Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.968554 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2lqb9" Jan 29 16:36:58 crc kubenswrapper[4714]: I0129 16:36:58.975566 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2lqb9"] Jan 29 16:36:59 crc kubenswrapper[4714]: I0129 16:36:59.046803 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b6f14e1-18de-4e4f-852e-caadbfd35b71-catalog-content\") pod \"redhat-marketplace-2lqb9\" (UID: \"5b6f14e1-18de-4e4f-852e-caadbfd35b71\") " pod="openshift-marketplace/redhat-marketplace-2lqb9" Jan 29 16:36:59 crc kubenswrapper[4714]: I0129 16:36:59.047374 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b6f14e1-18de-4e4f-852e-caadbfd35b71-utilities\") pod \"redhat-marketplace-2lqb9\" (UID: \"5b6f14e1-18de-4e4f-852e-caadbfd35b71\") " pod="openshift-marketplace/redhat-marketplace-2lqb9" Jan 29 16:36:59 crc kubenswrapper[4714]: I0129 16:36:59.047602 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlgsv\" (UniqueName: \"kubernetes.io/projected/5b6f14e1-18de-4e4f-852e-caadbfd35b71-kube-api-access-dlgsv\") pod \"redhat-marketplace-2lqb9\" (UID: \"5b6f14e1-18de-4e4f-852e-caadbfd35b71\") " pod="openshift-marketplace/redhat-marketplace-2lqb9" Jan 29 16:36:59 crc kubenswrapper[4714]: I0129 16:36:59.148765 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlgsv\" (UniqueName: \"kubernetes.io/projected/5b6f14e1-18de-4e4f-852e-caadbfd35b71-kube-api-access-dlgsv\") pod \"redhat-marketplace-2lqb9\" (UID: \"5b6f14e1-18de-4e4f-852e-caadbfd35b71\") " pod="openshift-marketplace/redhat-marketplace-2lqb9" Jan 29 16:36:59 crc kubenswrapper[4714]: I0129 16:36:59.149036 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b6f14e1-18de-4e4f-852e-caadbfd35b71-catalog-content\") pod \"redhat-marketplace-2lqb9\" (UID: \"5b6f14e1-18de-4e4f-852e-caadbfd35b71\") " pod="openshift-marketplace/redhat-marketplace-2lqb9" Jan 29 16:36:59 crc kubenswrapper[4714]: I0129 16:36:59.149378 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b6f14e1-18de-4e4f-852e-caadbfd35b71-utilities\") pod \"redhat-marketplace-2lqb9\" (UID: \"5b6f14e1-18de-4e4f-852e-caadbfd35b71\") " pod="openshift-marketplace/redhat-marketplace-2lqb9" Jan 29 16:36:59 crc kubenswrapper[4714]: I0129 16:36:59.149845 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b6f14e1-18de-4e4f-852e-caadbfd35b71-utilities\") pod \"redhat-marketplace-2lqb9\" (UID: \"5b6f14e1-18de-4e4f-852e-caadbfd35b71\") " pod="openshift-marketplace/redhat-marketplace-2lqb9" Jan 29 16:36:59 crc kubenswrapper[4714]: I0129 16:36:59.149851 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b6f14e1-18de-4e4f-852e-caadbfd35b71-catalog-content\") pod \"redhat-marketplace-2lqb9\" (UID: \"5b6f14e1-18de-4e4f-852e-caadbfd35b71\") " pod="openshift-marketplace/redhat-marketplace-2lqb9" Jan 29 16:36:59 crc kubenswrapper[4714]: I0129 16:36:59.174833 4714 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-dlgsv\" (UniqueName: \"kubernetes.io/projected/5b6f14e1-18de-4e4f-852e-caadbfd35b71-kube-api-access-dlgsv\") pod \"redhat-marketplace-2lqb9\" (UID: \"5b6f14e1-18de-4e4f-852e-caadbfd35b71\") " pod="openshift-marketplace/redhat-marketplace-2lqb9" Jan 29 16:36:59 crc kubenswrapper[4714]: I0129 16:36:59.286120 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2lqb9" Jan 29 16:36:59 crc kubenswrapper[4714]: I0129 16:36:59.510541 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2lqb9"] Jan 29 16:36:59 crc kubenswrapper[4714]: W0129 16:36:59.515794 4714 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b6f14e1_18de_4e4f_852e_caadbfd35b71.slice/crio-f02d4ba09eb3f5d2bb88a68856dd6a54f7621db0ebd026df1821a9b0f7c172cf WatchSource:0}: Error finding container f02d4ba09eb3f5d2bb88a68856dd6a54f7621db0ebd026df1821a9b0f7c172cf: Status 404 returned error can't find the container with id f02d4ba09eb3f5d2bb88a68856dd6a54f7621db0ebd026df1821a9b0f7c172cf Jan 29 16:36:59 crc kubenswrapper[4714]: I0129 16:36:59.597283 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2lqb9" event={"ID":"5b6f14e1-18de-4e4f-852e-caadbfd35b71","Type":"ContainerStarted","Data":"f02d4ba09eb3f5d2bb88a68856dd6a54f7621db0ebd026df1821a9b0f7c172cf"} Jan 29 16:37:00 crc kubenswrapper[4714]: I0129 16:37:00.195449 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c12ad14-f878-42a1-a168-bad4026ec2dd" path="/var/lib/kubelet/pods/8c12ad14-f878-42a1-a168-bad4026ec2dd/volumes" Jan 29 16:37:00 crc kubenswrapper[4714]: I0129 16:37:00.610731 4714 generic.go:334] "Generic (PLEG): container finished" podID="5b6f14e1-18de-4e4f-852e-caadbfd35b71" containerID="2f13fd962378375cad6ff392c83e16f142844861e33517b57b33783403f85fe4" exitCode=0 Jan 29 16:37:00 crc kubenswrapper[4714]: I0129 16:37:00.610838 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2lqb9" event={"ID":"5b6f14e1-18de-4e4f-852e-caadbfd35b71","Type":"ContainerDied","Data":"2f13fd962378375cad6ff392c83e16f142844861e33517b57b33783403f85fe4"} Jan 29 16:37:01 crc kubenswrapper[4714]: I0129 16:37:01.620848 4714 generic.go:334] "Generic (PLEG): container finished" podID="5b6f14e1-18de-4e4f-852e-caadbfd35b71" containerID="eff5c7821884beca01fac4daf91eae898a6dd4fe1c53607f72659ececca4c316" exitCode=0 Jan 29 16:37:01 crc kubenswrapper[4714]: I0129 16:37:01.620921 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2lqb9" event={"ID":"5b6f14e1-18de-4e4f-852e-caadbfd35b71","Type":"ContainerDied","Data":"eff5c7821884beca01fac4daf91eae898a6dd4fe1c53607f72659ececca4c316"} Jan 29 16:37:02 crc kubenswrapper[4714]: I0129 16:37:02.629726 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2lqb9" event={"ID":"5b6f14e1-18de-4e4f-852e-caadbfd35b71","Type":"ContainerStarted","Data":"d368dec2ee8c5ee9383ebfac5d4bd159dc5ec20a2bce8b644dccca52e523a8b8"} Jan 29 16:37:07 crc kubenswrapper[4714]: I0129 16:37:07.851511 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2lqb9" podStartSLOduration=8.424355507 podStartE2EDuration="9.85146758s" podCreationTimestamp="2026-01-29 16:36:58 +0000 UTC" 
firstStartedPulling="2026-01-29 16:37:00.613751411 +0000 UTC m=+1627.134252561" lastFinishedPulling="2026-01-29 16:37:02.040863504 +0000 UTC m=+1628.561364634" observedRunningTime="2026-01-29 16:37:02.663416867 +0000 UTC m=+1629.183917997" watchObservedRunningTime="2026-01-29 16:37:07.85146758 +0000 UTC m=+1634.371968730" Jan 29 16:37:07 crc kubenswrapper[4714]: I0129 16:37:07.854567 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jhx4h"] Jan 29 16:37:07 crc kubenswrapper[4714]: I0129 16:37:07.856854 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jhx4h" Jan 29 16:37:07 crc kubenswrapper[4714]: I0129 16:37:07.860770 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jhx4h"] Jan 29 16:37:07 crc kubenswrapper[4714]: I0129 16:37:07.969885 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f29ce50e-bbd2-4fe5-a032-5163f3f80ca0-utilities\") pod \"community-operators-jhx4h\" (UID: \"f29ce50e-bbd2-4fe5-a032-5163f3f80ca0\") " pod="openshift-marketplace/community-operators-jhx4h" Jan 29 16:37:07 crc kubenswrapper[4714]: I0129 16:37:07.970009 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hctqd\" (UniqueName: \"kubernetes.io/projected/f29ce50e-bbd2-4fe5-a032-5163f3f80ca0-kube-api-access-hctqd\") pod \"community-operators-jhx4h\" (UID: \"f29ce50e-bbd2-4fe5-a032-5163f3f80ca0\") " pod="openshift-marketplace/community-operators-jhx4h" Jan 29 16:37:07 crc kubenswrapper[4714]: I0129 16:37:07.970149 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f29ce50e-bbd2-4fe5-a032-5163f3f80ca0-catalog-content\") pod \"community-operators-jhx4h\" (UID: \"f29ce50e-bbd2-4fe5-a032-5163f3f80ca0\") " pod="openshift-marketplace/community-operators-jhx4h" Jan 29 16:37:08 crc kubenswrapper[4714]: I0129 16:37:08.071855 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f29ce50e-bbd2-4fe5-a032-5163f3f80ca0-catalog-content\") pod \"community-operators-jhx4h\" (UID: \"f29ce50e-bbd2-4fe5-a032-5163f3f80ca0\") " pod="openshift-marketplace/community-operators-jhx4h" Jan 29 16:37:08 crc kubenswrapper[4714]: I0129 16:37:08.071911 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f29ce50e-bbd2-4fe5-a032-5163f3f80ca0-utilities\") pod \"community-operators-jhx4h\" (UID: \"f29ce50e-bbd2-4fe5-a032-5163f3f80ca0\") " pod="openshift-marketplace/community-operators-jhx4h" Jan 29 16:37:08 crc kubenswrapper[4714]: I0129 16:37:08.071970 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hctqd\" (UniqueName: \"kubernetes.io/projected/f29ce50e-bbd2-4fe5-a032-5163f3f80ca0-kube-api-access-hctqd\") pod \"community-operators-jhx4h\" (UID: \"f29ce50e-bbd2-4fe5-a032-5163f3f80ca0\") " pod="openshift-marketplace/community-operators-jhx4h" Jan 29 16:37:08 crc kubenswrapper[4714]: I0129 16:37:08.072430 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f29ce50e-bbd2-4fe5-a032-5163f3f80ca0-catalog-content\") pod 
\"community-operators-jhx4h\" (UID: \"f29ce50e-bbd2-4fe5-a032-5163f3f80ca0\") " pod="openshift-marketplace/community-operators-jhx4h" Jan 29 16:37:08 crc kubenswrapper[4714]: I0129 16:37:08.072483 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f29ce50e-bbd2-4fe5-a032-5163f3f80ca0-utilities\") pod \"community-operators-jhx4h\" (UID: \"f29ce50e-bbd2-4fe5-a032-5163f3f80ca0\") " pod="openshift-marketplace/community-operators-jhx4h" Jan 29 16:37:08 crc kubenswrapper[4714]: I0129 16:37:08.102556 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hctqd\" (UniqueName: \"kubernetes.io/projected/f29ce50e-bbd2-4fe5-a032-5163f3f80ca0-kube-api-access-hctqd\") pod \"community-operators-jhx4h\" (UID: \"f29ce50e-bbd2-4fe5-a032-5163f3f80ca0\") " pod="openshift-marketplace/community-operators-jhx4h" Jan 29 16:37:08 crc kubenswrapper[4714]: I0129 16:37:08.175180 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jhx4h" Jan 29 16:37:08 crc kubenswrapper[4714]: I0129 16:37:08.464624 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jhx4h"] Jan 29 16:37:08 crc kubenswrapper[4714]: I0129 16:37:08.667737 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhx4h" event={"ID":"f29ce50e-bbd2-4fe5-a032-5163f3f80ca0","Type":"ContainerStarted","Data":"bb661581f20750aeeeb2dcee345a2fe34987995b5a11b5fadc693c52078fdfde"} Jan 29 16:37:09 crc kubenswrapper[4714]: I0129 16:37:09.286814 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2lqb9" Jan 29 16:37:09 crc kubenswrapper[4714]: I0129 16:37:09.286960 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2lqb9" Jan 29 16:37:09 crc kubenswrapper[4714]: I0129 16:37:09.343602 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2lqb9" Jan 29 16:37:09 crc kubenswrapper[4714]: I0129 16:37:09.680969 4714 generic.go:334] "Generic (PLEG): container finished" podID="f29ce50e-bbd2-4fe5-a032-5163f3f80ca0" containerID="12528aa63a645da73775930ec51922ba0e34d726bc9e8bf443beb9cd97abf1ae" exitCode=0 Jan 29 16:37:09 crc kubenswrapper[4714]: I0129 16:37:09.681052 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhx4h" event={"ID":"f29ce50e-bbd2-4fe5-a032-5163f3f80ca0","Type":"ContainerDied","Data":"12528aa63a645da73775930ec51922ba0e34d726bc9e8bf443beb9cd97abf1ae"} Jan 29 16:37:09 crc kubenswrapper[4714]: I0129 16:37:09.729149 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2lqb9" Jan 29 16:37:10 crc kubenswrapper[4714]: I0129 16:37:10.688348 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhx4h" event={"ID":"f29ce50e-bbd2-4fe5-a032-5163f3f80ca0","Type":"ContainerStarted","Data":"70f1c853b2f45447d864ab1780c740cfb8b5e66e1b37ea10c838d033de579edc"} Jan 29 16:37:11 crc kubenswrapper[4714]: I0129 16:37:11.629667 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2lqb9"] Jan 29 16:37:11 crc kubenswrapper[4714]: I0129 16:37:11.693363 4714 generic.go:334] "Generic (PLEG): container finished" 
podID="f29ce50e-bbd2-4fe5-a032-5163f3f80ca0" containerID="70f1c853b2f45447d864ab1780c740cfb8b5e66e1b37ea10c838d033de579edc" exitCode=0 Jan 29 16:37:11 crc kubenswrapper[4714]: I0129 16:37:11.693421 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhx4h" event={"ID":"f29ce50e-bbd2-4fe5-a032-5163f3f80ca0","Type":"ContainerDied","Data":"70f1c853b2f45447d864ab1780c740cfb8b5e66e1b37ea10c838d033de579edc"} Jan 29 16:37:12 crc kubenswrapper[4714]: I0129 16:37:12.700051 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhx4h" event={"ID":"f29ce50e-bbd2-4fe5-a032-5163f3f80ca0","Type":"ContainerStarted","Data":"d8f87ee4c8ffa275df436d8576ba01192465ef13d6a102cdef3b3eec7691fb41"} Jan 29 16:37:12 crc kubenswrapper[4714]: I0129 16:37:12.700172 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2lqb9" podUID="5b6f14e1-18de-4e4f-852e-caadbfd35b71" containerName="registry-server" containerID="cri-o://d368dec2ee8c5ee9383ebfac5d4bd159dc5ec20a2bce8b644dccca52e523a8b8" gracePeriod=2 Jan 29 16:37:12 crc kubenswrapper[4714]: I0129 16:37:12.724906 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jhx4h" podStartSLOduration=3.299617414 podStartE2EDuration="5.724885234s" podCreationTimestamp="2026-01-29 16:37:07 +0000 UTC" firstStartedPulling="2026-01-29 16:37:09.683130943 +0000 UTC m=+1636.203632073" lastFinishedPulling="2026-01-29 16:37:12.108398773 +0000 UTC m=+1638.628899893" observedRunningTime="2026-01-29 16:37:12.720922341 +0000 UTC m=+1639.241423461" watchObservedRunningTime="2026-01-29 16:37:12.724885234 +0000 UTC m=+1639.245386364" Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.041873 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2lqb9" Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.150233 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b6f14e1-18de-4e4f-852e-caadbfd35b71-utilities\") pod \"5b6f14e1-18de-4e4f-852e-caadbfd35b71\" (UID: \"5b6f14e1-18de-4e4f-852e-caadbfd35b71\") " Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.150277 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlgsv\" (UniqueName: \"kubernetes.io/projected/5b6f14e1-18de-4e4f-852e-caadbfd35b71-kube-api-access-dlgsv\") pod \"5b6f14e1-18de-4e4f-852e-caadbfd35b71\" (UID: \"5b6f14e1-18de-4e4f-852e-caadbfd35b71\") " Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.150329 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b6f14e1-18de-4e4f-852e-caadbfd35b71-catalog-content\") pod \"5b6f14e1-18de-4e4f-852e-caadbfd35b71\" (UID: \"5b6f14e1-18de-4e4f-852e-caadbfd35b71\") " Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.151003 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b6f14e1-18de-4e4f-852e-caadbfd35b71-utilities" (OuterVolumeSpecName: "utilities") pod "5b6f14e1-18de-4e4f-852e-caadbfd35b71" (UID: "5b6f14e1-18de-4e4f-852e-caadbfd35b71"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.164568 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b6f14e1-18de-4e4f-852e-caadbfd35b71-kube-api-access-dlgsv" (OuterVolumeSpecName: "kube-api-access-dlgsv") pod "5b6f14e1-18de-4e4f-852e-caadbfd35b71" (UID: "5b6f14e1-18de-4e4f-852e-caadbfd35b71"). InnerVolumeSpecName "kube-api-access-dlgsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.181216 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b6f14e1-18de-4e4f-852e-caadbfd35b71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b6f14e1-18de-4e4f-852e-caadbfd35b71" (UID: "5b6f14e1-18de-4e4f-852e-caadbfd35b71"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.251153 4714 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b6f14e1-18de-4e4f-852e-caadbfd35b71-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.251185 4714 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b6f14e1-18de-4e4f-852e-caadbfd35b71-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.251194 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlgsv\" (UniqueName: \"kubernetes.io/projected/5b6f14e1-18de-4e4f-852e-caadbfd35b71-kube-api-access-dlgsv\") on node \"crc\" DevicePath \"\"" Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.711287 4714 generic.go:334] "Generic (PLEG): container finished" podID="5b6f14e1-18de-4e4f-852e-caadbfd35b71" containerID="d368dec2ee8c5ee9383ebfac5d4bd159dc5ec20a2bce8b644dccca52e523a8b8" exitCode=0 Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.711316 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2lqb9" event={"ID":"5b6f14e1-18de-4e4f-852e-caadbfd35b71","Type":"ContainerDied","Data":"d368dec2ee8c5ee9383ebfac5d4bd159dc5ec20a2bce8b644dccca52e523a8b8"} Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.711628 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2lqb9" event={"ID":"5b6f14e1-18de-4e4f-852e-caadbfd35b71","Type":"ContainerDied","Data":"f02d4ba09eb3f5d2bb88a68856dd6a54f7621db0ebd026df1821a9b0f7c172cf"} Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.711660 4714 scope.go:117] "RemoveContainer" containerID="d368dec2ee8c5ee9383ebfac5d4bd159dc5ec20a2bce8b644dccca52e523a8b8" Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.711377 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2lqb9" Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.779624 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2lqb9"] Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.800606 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2lqb9"] Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.803981 4714 scope.go:117] "RemoveContainer" containerID="eff5c7821884beca01fac4daf91eae898a6dd4fe1c53607f72659ececca4c316" Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.857264 4714 scope.go:117] "RemoveContainer" containerID="2f13fd962378375cad6ff392c83e16f142844861e33517b57b33783403f85fe4" Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.881743 4714 scope.go:117] "RemoveContainer" containerID="d368dec2ee8c5ee9383ebfac5d4bd159dc5ec20a2bce8b644dccca52e523a8b8" Jan 29 16:37:13 crc kubenswrapper[4714]: E0129 16:37:13.882335 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d368dec2ee8c5ee9383ebfac5d4bd159dc5ec20a2bce8b644dccca52e523a8b8\": container with ID starting with d368dec2ee8c5ee9383ebfac5d4bd159dc5ec20a2bce8b644dccca52e523a8b8 not found: ID does not exist" containerID="d368dec2ee8c5ee9383ebfac5d4bd159dc5ec20a2bce8b644dccca52e523a8b8" Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.882400 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d368dec2ee8c5ee9383ebfac5d4bd159dc5ec20a2bce8b644dccca52e523a8b8"} err="failed to get container status \"d368dec2ee8c5ee9383ebfac5d4bd159dc5ec20a2bce8b644dccca52e523a8b8\": rpc error: code = NotFound desc = could not find container \"d368dec2ee8c5ee9383ebfac5d4bd159dc5ec20a2bce8b644dccca52e523a8b8\": container with ID starting with d368dec2ee8c5ee9383ebfac5d4bd159dc5ec20a2bce8b644dccca52e523a8b8 not found: ID does not exist" Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.882444 4714 scope.go:117] "RemoveContainer" containerID="eff5c7821884beca01fac4daf91eae898a6dd4fe1c53607f72659ececca4c316" Jan 29 16:37:13 crc kubenswrapper[4714]: E0129 16:37:13.882815 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eff5c7821884beca01fac4daf91eae898a6dd4fe1c53607f72659ececca4c316\": container with ID starting with eff5c7821884beca01fac4daf91eae898a6dd4fe1c53607f72659ececca4c316 not found: ID does not exist" containerID="eff5c7821884beca01fac4daf91eae898a6dd4fe1c53607f72659ececca4c316" Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.882860 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eff5c7821884beca01fac4daf91eae898a6dd4fe1c53607f72659ececca4c316"} err="failed to get container status \"eff5c7821884beca01fac4daf91eae898a6dd4fe1c53607f72659ececca4c316\": rpc error: code = NotFound desc = could not find container \"eff5c7821884beca01fac4daf91eae898a6dd4fe1c53607f72659ececca4c316\": container with ID starting with eff5c7821884beca01fac4daf91eae898a6dd4fe1c53607f72659ececca4c316 not found: ID does not exist" Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.882900 4714 scope.go:117] "RemoveContainer" containerID="2f13fd962378375cad6ff392c83e16f142844861e33517b57b33783403f85fe4" Jan 29 16:37:13 crc kubenswrapper[4714]: E0129 16:37:13.883692 4714 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2f13fd962378375cad6ff392c83e16f142844861e33517b57b33783403f85fe4\": container with ID starting with 2f13fd962378375cad6ff392c83e16f142844861e33517b57b33783403f85fe4 not found: ID does not exist" containerID="2f13fd962378375cad6ff392c83e16f142844861e33517b57b33783403f85fe4" Jan 29 16:37:13 crc kubenswrapper[4714]: I0129 16:37:13.883752 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f13fd962378375cad6ff392c83e16f142844861e33517b57b33783403f85fe4"} err="failed to get container status \"2f13fd962378375cad6ff392c83e16f142844861e33517b57b33783403f85fe4\": rpc error: code = NotFound desc = could not find container \"2f13fd962378375cad6ff392c83e16f142844861e33517b57b33783403f85fe4\": container with ID starting with 2f13fd962378375cad6ff392c83e16f142844861e33517b57b33783403f85fe4 not found: ID does not exist" Jan 29 16:37:14 crc kubenswrapper[4714]: I0129 16:37:14.196868 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b6f14e1-18de-4e4f-852e-caadbfd35b71" path="/var/lib/kubelet/pods/5b6f14e1-18de-4e4f-852e-caadbfd35b71/volumes" Jan 29 16:37:17 crc kubenswrapper[4714]: I0129 16:37:17.029784 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qdbvx"] Jan 29 16:37:17 crc kubenswrapper[4714]: E0129 16:37:17.030385 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6f14e1-18de-4e4f-852e-caadbfd35b71" containerName="registry-server" Jan 29 16:37:17 crc kubenswrapper[4714]: I0129 16:37:17.030401 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6f14e1-18de-4e4f-852e-caadbfd35b71" containerName="registry-server" Jan 29 16:37:17 crc kubenswrapper[4714]: E0129 16:37:17.030417 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6f14e1-18de-4e4f-852e-caadbfd35b71" containerName="extract-utilities" Jan 29 16:37:17 crc kubenswrapper[4714]: I0129 16:37:17.030425 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6f14e1-18de-4e4f-852e-caadbfd35b71" containerName="extract-utilities" Jan 29 16:37:17 crc kubenswrapper[4714]: E0129 16:37:17.030446 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6f14e1-18de-4e4f-852e-caadbfd35b71" containerName="extract-content" Jan 29 16:37:17 crc kubenswrapper[4714]: I0129 16:37:17.030454 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6f14e1-18de-4e4f-852e-caadbfd35b71" containerName="extract-content" Jan 29 16:37:17 crc kubenswrapper[4714]: I0129 16:37:17.030589 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b6f14e1-18de-4e4f-852e-caadbfd35b71" containerName="registry-server" Jan 29 16:37:17 crc kubenswrapper[4714]: I0129 16:37:17.031650 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qdbvx" Jan 29 16:37:17 crc kubenswrapper[4714]: I0129 16:37:17.053398 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qdbvx"] Jan 29 16:37:17 crc kubenswrapper[4714]: I0129 16:37:17.100476 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e6b3552-9d7a-459f-a4c4-67da40c67c15-catalog-content\") pod \"certified-operators-qdbvx\" (UID: \"4e6b3552-9d7a-459f-a4c4-67da40c67c15\") " pod="openshift-marketplace/certified-operators-qdbvx" Jan 29 16:37:17 crc kubenswrapper[4714]: I0129 16:37:17.100524 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e6b3552-9d7a-459f-a4c4-67da40c67c15-utilities\") pod \"certified-operators-qdbvx\" (UID: \"4e6b3552-9d7a-459f-a4c4-67da40c67c15\") " pod="openshift-marketplace/certified-operators-qdbvx" Jan 29 16:37:17 crc kubenswrapper[4714]: I0129 16:37:17.100558 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7b4g\" (UniqueName: \"kubernetes.io/projected/4e6b3552-9d7a-459f-a4c4-67da40c67c15-kube-api-access-j7b4g\") pod \"certified-operators-qdbvx\" (UID: \"4e6b3552-9d7a-459f-a4c4-67da40c67c15\") " pod="openshift-marketplace/certified-operators-qdbvx" Jan 29 16:37:17 crc kubenswrapper[4714]: I0129 16:37:17.202462 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7b4g\" (UniqueName: \"kubernetes.io/projected/4e6b3552-9d7a-459f-a4c4-67da40c67c15-kube-api-access-j7b4g\") pod \"certified-operators-qdbvx\" (UID: \"4e6b3552-9d7a-459f-a4c4-67da40c67c15\") " pod="openshift-marketplace/certified-operators-qdbvx" Jan 29 16:37:17 crc kubenswrapper[4714]: I0129 16:37:17.202652 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e6b3552-9d7a-459f-a4c4-67da40c67c15-catalog-content\") pod \"certified-operators-qdbvx\" (UID: \"4e6b3552-9d7a-459f-a4c4-67da40c67c15\") " pod="openshift-marketplace/certified-operators-qdbvx" Jan 29 16:37:17 crc kubenswrapper[4714]: I0129 16:37:17.202693 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e6b3552-9d7a-459f-a4c4-67da40c67c15-utilities\") pod \"certified-operators-qdbvx\" (UID: \"4e6b3552-9d7a-459f-a4c4-67da40c67c15\") " pod="openshift-marketplace/certified-operators-qdbvx" Jan 29 16:37:17 crc kubenswrapper[4714]: I0129 16:37:17.203246 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e6b3552-9d7a-459f-a4c4-67da40c67c15-utilities\") pod \"certified-operators-qdbvx\" (UID: \"4e6b3552-9d7a-459f-a4c4-67da40c67c15\") " pod="openshift-marketplace/certified-operators-qdbvx" Jan 29 16:37:17 crc kubenswrapper[4714]: I0129 16:37:17.203302 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e6b3552-9d7a-459f-a4c4-67da40c67c15-catalog-content\") pod \"certified-operators-qdbvx\" (UID: \"4e6b3552-9d7a-459f-a4c4-67da40c67c15\") " pod="openshift-marketplace/certified-operators-qdbvx" Jan 29 16:37:17 crc kubenswrapper[4714]: I0129 16:37:17.251013 4714 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-j7b4g\" (UniqueName: \"kubernetes.io/projected/4e6b3552-9d7a-459f-a4c4-67da40c67c15-kube-api-access-j7b4g\") pod \"certified-operators-qdbvx\" (UID: \"4e6b3552-9d7a-459f-a4c4-67da40c67c15\") " pod="openshift-marketplace/certified-operators-qdbvx" Jan 29 16:37:17 crc kubenswrapper[4714]: I0129 16:37:17.355510 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qdbvx" Jan 29 16:37:17 crc kubenswrapper[4714]: I0129 16:37:17.586781 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qdbvx"] Jan 29 16:37:17 crc kubenswrapper[4714]: I0129 16:37:17.737553 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdbvx" event={"ID":"4e6b3552-9d7a-459f-a4c4-67da40c67c15","Type":"ContainerStarted","Data":"7cbae5276fb8d757c39c5f0cacfa9727ddbf2eb3306d3df09a254f9f755544f8"} Jan 29 16:37:18 crc kubenswrapper[4714]: I0129 16:37:18.175767 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jhx4h" Jan 29 16:37:18 crc kubenswrapper[4714]: I0129 16:37:18.175856 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jhx4h" Jan 29 16:37:18 crc kubenswrapper[4714]: I0129 16:37:18.231831 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jhx4h" Jan 29 16:37:18 crc kubenswrapper[4714]: I0129 16:37:18.751156 4714 generic.go:334] "Generic (PLEG): container finished" podID="4e6b3552-9d7a-459f-a4c4-67da40c67c15" containerID="654e4e9ffb535db9fca5c14eda678189ef4ed63a10878793979a5fe56e16938d" exitCode=0 Jan 29 16:37:18 crc kubenswrapper[4714]: I0129 16:37:18.751244 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdbvx" event={"ID":"4e6b3552-9d7a-459f-a4c4-67da40c67c15","Type":"ContainerDied","Data":"654e4e9ffb535db9fca5c14eda678189ef4ed63a10878793979a5fe56e16938d"} Jan 29 16:37:18 crc kubenswrapper[4714]: I0129 16:37:18.808290 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jhx4h" Jan 29 16:37:20 crc kubenswrapper[4714]: I0129 16:37:20.762304 4714 generic.go:334] "Generic (PLEG): container finished" podID="4e6b3552-9d7a-459f-a4c4-67da40c67c15" containerID="cd702a907a78fab939fb238809b451a3868bf29847022a3fd0a1279c986cdaca" exitCode=0 Jan 29 16:37:20 crc kubenswrapper[4714]: I0129 16:37:20.762348 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdbvx" event={"ID":"4e6b3552-9d7a-459f-a4c4-67da40c67c15","Type":"ContainerDied","Data":"cd702a907a78fab939fb238809b451a3868bf29847022a3fd0a1279c986cdaca"} Jan 29 16:37:21 crc kubenswrapper[4714]: I0129 16:37:21.423956 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jhx4h"] Jan 29 16:37:21 crc kubenswrapper[4714]: I0129 16:37:21.424214 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jhx4h" podUID="f29ce50e-bbd2-4fe5-a032-5163f3f80ca0" containerName="registry-server" containerID="cri-o://d8f87ee4c8ffa275df436d8576ba01192465ef13d6a102cdef3b3eec7691fb41" gracePeriod=2 Jan 29 16:37:21 crc kubenswrapper[4714]: I0129 16:37:21.769557 4714 generic.go:334] "Generic (PLEG): 
container finished" podID="f29ce50e-bbd2-4fe5-a032-5163f3f80ca0" containerID="d8f87ee4c8ffa275df436d8576ba01192465ef13d6a102cdef3b3eec7691fb41" exitCode=0 Jan 29 16:37:21 crc kubenswrapper[4714]: I0129 16:37:21.769775 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhx4h" event={"ID":"f29ce50e-bbd2-4fe5-a032-5163f3f80ca0","Type":"ContainerDied","Data":"d8f87ee4c8ffa275df436d8576ba01192465ef13d6a102cdef3b3eec7691fb41"} Jan 29 16:37:21 crc kubenswrapper[4714]: I0129 16:37:21.772653 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdbvx" event={"ID":"4e6b3552-9d7a-459f-a4c4-67da40c67c15","Type":"ContainerStarted","Data":"9632c9e5f7fc6621d6e14cceb0240050190e925aa5472c826d55f03bda4cac18"} Jan 29 16:37:21 crc kubenswrapper[4714]: I0129 16:37:21.797759 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qdbvx" podStartSLOduration=2.27724702 podStartE2EDuration="4.797742406s" podCreationTimestamp="2026-01-29 16:37:17 +0000 UTC" firstStartedPulling="2026-01-29 16:37:18.753113572 +0000 UTC m=+1645.273614732" lastFinishedPulling="2026-01-29 16:37:21.273608998 +0000 UTC m=+1647.794110118" observedRunningTime="2026-01-29 16:37:21.795574334 +0000 UTC m=+1648.316075464" watchObservedRunningTime="2026-01-29 16:37:21.797742406 +0000 UTC m=+1648.318243526" Jan 29 16:37:22 crc kubenswrapper[4714]: I0129 16:37:22.065997 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jhx4h" Jan 29 16:37:22 crc kubenswrapper[4714]: I0129 16:37:22.164919 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f29ce50e-bbd2-4fe5-a032-5163f3f80ca0-utilities\") pod \"f29ce50e-bbd2-4fe5-a032-5163f3f80ca0\" (UID: \"f29ce50e-bbd2-4fe5-a032-5163f3f80ca0\") " Jan 29 16:37:22 crc kubenswrapper[4714]: I0129 16:37:22.165046 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hctqd\" (UniqueName: \"kubernetes.io/projected/f29ce50e-bbd2-4fe5-a032-5163f3f80ca0-kube-api-access-hctqd\") pod \"f29ce50e-bbd2-4fe5-a032-5163f3f80ca0\" (UID: \"f29ce50e-bbd2-4fe5-a032-5163f3f80ca0\") " Jan 29 16:37:22 crc kubenswrapper[4714]: I0129 16:37:22.165121 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f29ce50e-bbd2-4fe5-a032-5163f3f80ca0-catalog-content\") pod \"f29ce50e-bbd2-4fe5-a032-5163f3f80ca0\" (UID: \"f29ce50e-bbd2-4fe5-a032-5163f3f80ca0\") " Jan 29 16:37:22 crc kubenswrapper[4714]: I0129 16:37:22.165578 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f29ce50e-bbd2-4fe5-a032-5163f3f80ca0-utilities" (OuterVolumeSpecName: "utilities") pod "f29ce50e-bbd2-4fe5-a032-5163f3f80ca0" (UID: "f29ce50e-bbd2-4fe5-a032-5163f3f80ca0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:37:22 crc kubenswrapper[4714]: I0129 16:37:22.188240 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f29ce50e-bbd2-4fe5-a032-5163f3f80ca0-kube-api-access-hctqd" (OuterVolumeSpecName: "kube-api-access-hctqd") pod "f29ce50e-bbd2-4fe5-a032-5163f3f80ca0" (UID: "f29ce50e-bbd2-4fe5-a032-5163f3f80ca0"). InnerVolumeSpecName "kube-api-access-hctqd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:37:22 crc kubenswrapper[4714]: I0129 16:37:22.267638 4714 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f29ce50e-bbd2-4fe5-a032-5163f3f80ca0-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:37:22 crc kubenswrapper[4714]: I0129 16:37:22.267699 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hctqd\" (UniqueName: \"kubernetes.io/projected/f29ce50e-bbd2-4fe5-a032-5163f3f80ca0-kube-api-access-hctqd\") on node \"crc\" DevicePath \"\"" Jan 29 16:37:22 crc kubenswrapper[4714]: I0129 16:37:22.689117 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f29ce50e-bbd2-4fe5-a032-5163f3f80ca0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f29ce50e-bbd2-4fe5-a032-5163f3f80ca0" (UID: "f29ce50e-bbd2-4fe5-a032-5163f3f80ca0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:37:22 crc kubenswrapper[4714]: I0129 16:37:22.774122 4714 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f29ce50e-bbd2-4fe5-a032-5163f3f80ca0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:37:22 crc kubenswrapper[4714]: I0129 16:37:22.780200 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jhx4h" Jan 29 16:37:22 crc kubenswrapper[4714]: I0129 16:37:22.780189 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhx4h" event={"ID":"f29ce50e-bbd2-4fe5-a032-5163f3f80ca0","Type":"ContainerDied","Data":"bb661581f20750aeeeb2dcee345a2fe34987995b5a11b5fadc693c52078fdfde"} Jan 29 16:37:22 crc kubenswrapper[4714]: I0129 16:37:22.780272 4714 scope.go:117] "RemoveContainer" containerID="d8f87ee4c8ffa275df436d8576ba01192465ef13d6a102cdef3b3eec7691fb41" Jan 29 16:37:22 crc kubenswrapper[4714]: I0129 16:37:22.800313 4714 scope.go:117] "RemoveContainer" containerID="70f1c853b2f45447d864ab1780c740cfb8b5e66e1b37ea10c838d033de579edc" Jan 29 16:37:22 crc kubenswrapper[4714]: I0129 16:37:22.813514 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jhx4h"] Jan 29 16:37:22 crc kubenswrapper[4714]: I0129 16:37:22.818803 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jhx4h"] Jan 29 16:37:22 crc kubenswrapper[4714]: I0129 16:37:22.829641 4714 scope.go:117] "RemoveContainer" containerID="12528aa63a645da73775930ec51922ba0e34d726bc9e8bf443beb9cd97abf1ae" Jan 29 16:37:24 crc kubenswrapper[4714]: I0129 16:37:24.195327 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f29ce50e-bbd2-4fe5-a032-5163f3f80ca0" path="/var/lib/kubelet/pods/f29ce50e-bbd2-4fe5-a032-5163f3f80ca0/volumes" Jan 29 16:37:27 crc kubenswrapper[4714]: I0129 16:37:27.356303 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qdbvx" Jan 29 16:37:27 crc kubenswrapper[4714]: I0129 16:37:27.358501 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qdbvx" Jan 29 16:37:27 crc kubenswrapper[4714]: I0129 16:37:27.408619 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qdbvx" Jan 29 16:37:27 crc 
kubenswrapper[4714]: I0129 16:37:27.844781 4714 patch_prober.go:28] interesting pod/machine-config-daemon-ppngk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:37:27 crc kubenswrapper[4714]: I0129 16:37:27.845164 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:37:27 crc kubenswrapper[4714]: I0129 16:37:27.869765 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qdbvx" Jan 29 16:37:27 crc kubenswrapper[4714]: I0129 16:37:27.918533 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qdbvx"] Jan 29 16:37:29 crc kubenswrapper[4714]: I0129 16:37:29.822553 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qdbvx" podUID="4e6b3552-9d7a-459f-a4c4-67da40c67c15" containerName="registry-server" containerID="cri-o://9632c9e5f7fc6621d6e14cceb0240050190e925aa5472c826d55f03bda4cac18" gracePeriod=2 Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.335790 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qdbvx" Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.391962 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7b4g\" (UniqueName: \"kubernetes.io/projected/4e6b3552-9d7a-459f-a4c4-67da40c67c15-kube-api-access-j7b4g\") pod \"4e6b3552-9d7a-459f-a4c4-67da40c67c15\" (UID: \"4e6b3552-9d7a-459f-a4c4-67da40c67c15\") " Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.392044 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e6b3552-9d7a-459f-a4c4-67da40c67c15-utilities\") pod \"4e6b3552-9d7a-459f-a4c4-67da40c67c15\" (UID: \"4e6b3552-9d7a-459f-a4c4-67da40c67c15\") " Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.392111 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e6b3552-9d7a-459f-a4c4-67da40c67c15-catalog-content\") pod \"4e6b3552-9d7a-459f-a4c4-67da40c67c15\" (UID: \"4e6b3552-9d7a-459f-a4c4-67da40c67c15\") " Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.393040 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e6b3552-9d7a-459f-a4c4-67da40c67c15-utilities" (OuterVolumeSpecName: "utilities") pod "4e6b3552-9d7a-459f-a4c4-67da40c67c15" (UID: "4e6b3552-9d7a-459f-a4c4-67da40c67c15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.397168 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e6b3552-9d7a-459f-a4c4-67da40c67c15-kube-api-access-j7b4g" (OuterVolumeSpecName: "kube-api-access-j7b4g") pod "4e6b3552-9d7a-459f-a4c4-67da40c67c15" (UID: "4e6b3552-9d7a-459f-a4c4-67da40c67c15"). 
InnerVolumeSpecName "kube-api-access-j7b4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.445465 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e6b3552-9d7a-459f-a4c4-67da40c67c15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e6b3552-9d7a-459f-a4c4-67da40c67c15" (UID: "4e6b3552-9d7a-459f-a4c4-67da40c67c15"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.493529 4714 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e6b3552-9d7a-459f-a4c4-67da40c67c15-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.493565 4714 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e6b3552-9d7a-459f-a4c4-67da40c67c15-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.493580 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7b4g\" (UniqueName: \"kubernetes.io/projected/4e6b3552-9d7a-459f-a4c4-67da40c67c15-kube-api-access-j7b4g\") on node \"crc\" DevicePath \"\"" Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.829895 4714 generic.go:334] "Generic (PLEG): container finished" podID="4e6b3552-9d7a-459f-a4c4-67da40c67c15" containerID="9632c9e5f7fc6621d6e14cceb0240050190e925aa5472c826d55f03bda4cac18" exitCode=0 Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.829952 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdbvx" event={"ID":"4e6b3552-9d7a-459f-a4c4-67da40c67c15","Type":"ContainerDied","Data":"9632c9e5f7fc6621d6e14cceb0240050190e925aa5472c826d55f03bda4cac18"} Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.829983 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdbvx" event={"ID":"4e6b3552-9d7a-459f-a4c4-67da40c67c15","Type":"ContainerDied","Data":"7cbae5276fb8d757c39c5f0cacfa9727ddbf2eb3306d3df09a254f9f755544f8"} Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.830000 4714 scope.go:117] "RemoveContainer" containerID="9632c9e5f7fc6621d6e14cceb0240050190e925aa5472c826d55f03bda4cac18" Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.830059 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qdbvx" Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.846306 4714 scope.go:117] "RemoveContainer" containerID="cd702a907a78fab939fb238809b451a3868bf29847022a3fd0a1279c986cdaca" Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.858616 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qdbvx"] Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.861959 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qdbvx"] Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.872639 4714 scope.go:117] "RemoveContainer" containerID="654e4e9ffb535db9fca5c14eda678189ef4ed63a10878793979a5fe56e16938d" Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.886710 4714 scope.go:117] "RemoveContainer" containerID="9632c9e5f7fc6621d6e14cceb0240050190e925aa5472c826d55f03bda4cac18" Jan 29 16:37:30 crc kubenswrapper[4714]: E0129 16:37:30.887198 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9632c9e5f7fc6621d6e14cceb0240050190e925aa5472c826d55f03bda4cac18\": container with ID starting with 9632c9e5f7fc6621d6e14cceb0240050190e925aa5472c826d55f03bda4cac18 not found: ID does not exist" containerID="9632c9e5f7fc6621d6e14cceb0240050190e925aa5472c826d55f03bda4cac18" Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.887237 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9632c9e5f7fc6621d6e14cceb0240050190e925aa5472c826d55f03bda4cac18"} err="failed to get container status \"9632c9e5f7fc6621d6e14cceb0240050190e925aa5472c826d55f03bda4cac18\": rpc error: code = NotFound desc = could not find container \"9632c9e5f7fc6621d6e14cceb0240050190e925aa5472c826d55f03bda4cac18\": container with ID starting with 9632c9e5f7fc6621d6e14cceb0240050190e925aa5472c826d55f03bda4cac18 not found: ID does not exist" Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.887266 4714 scope.go:117] "RemoveContainer" containerID="cd702a907a78fab939fb238809b451a3868bf29847022a3fd0a1279c986cdaca" Jan 29 16:37:30 crc kubenswrapper[4714]: E0129 16:37:30.887514 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd702a907a78fab939fb238809b451a3868bf29847022a3fd0a1279c986cdaca\": container with ID starting with cd702a907a78fab939fb238809b451a3868bf29847022a3fd0a1279c986cdaca not found: ID does not exist" containerID="cd702a907a78fab939fb238809b451a3868bf29847022a3fd0a1279c986cdaca" Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.887572 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd702a907a78fab939fb238809b451a3868bf29847022a3fd0a1279c986cdaca"} err="failed to get container status \"cd702a907a78fab939fb238809b451a3868bf29847022a3fd0a1279c986cdaca\": rpc error: code = NotFound desc = could not find container \"cd702a907a78fab939fb238809b451a3868bf29847022a3fd0a1279c986cdaca\": container with ID starting with cd702a907a78fab939fb238809b451a3868bf29847022a3fd0a1279c986cdaca not found: ID does not exist" Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.887615 4714 scope.go:117] "RemoveContainer" containerID="654e4e9ffb535db9fca5c14eda678189ef4ed63a10878793979a5fe56e16938d" Jan 29 16:37:30 crc kubenswrapper[4714]: E0129 16:37:30.887950 4714 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"654e4e9ffb535db9fca5c14eda678189ef4ed63a10878793979a5fe56e16938d\": container with ID starting with 654e4e9ffb535db9fca5c14eda678189ef4ed63a10878793979a5fe56e16938d not found: ID does not exist" containerID="654e4e9ffb535db9fca5c14eda678189ef4ed63a10878793979a5fe56e16938d" Jan 29 16:37:30 crc kubenswrapper[4714]: I0129 16:37:30.887973 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"654e4e9ffb535db9fca5c14eda678189ef4ed63a10878793979a5fe56e16938d"} err="failed to get container status \"654e4e9ffb535db9fca5c14eda678189ef4ed63a10878793979a5fe56e16938d\": rpc error: code = NotFound desc = could not find container \"654e4e9ffb535db9fca5c14eda678189ef4ed63a10878793979a5fe56e16938d\": container with ID starting with 654e4e9ffb535db9fca5c14eda678189ef4ed63a10878793979a5fe56e16938d not found: ID does not exist" Jan 29 16:37:32 crc kubenswrapper[4714]: I0129 16:37:32.193166 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e6b3552-9d7a-459f-a4c4-67da40c67c15" path="/var/lib/kubelet/pods/4e6b3552-9d7a-459f-a4c4-67da40c67c15/volumes" Jan 29 16:37:33 crc kubenswrapper[4714]: I0129 16:37:33.953012 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-sq9mx_8062d225-aa57-48df-bf28-2254ecc4f635/control-plane-machine-set-operator/0.log" Jan 29 16:37:34 crc kubenswrapper[4714]: I0129 16:37:34.095352 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-z4h55_bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92/kube-rbac-proxy/0.log" Jan 29 16:37:34 crc kubenswrapper[4714]: I0129 16:37:34.122247 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-z4h55_bfb0bd22-cbd8-4ce8-a4f6-86a16dcdeb92/machine-api-operator/0.log" Jan 29 16:37:57 crc kubenswrapper[4714]: I0129 16:37:57.844835 4714 patch_prober.go:28] interesting pod/machine-config-daemon-ppngk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:37:57 crc kubenswrapper[4714]: I0129 16:37:57.845432 4714 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:37:57 crc kubenswrapper[4714]: I0129 16:37:57.845480 4714 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" Jan 29 16:37:57 crc kubenswrapper[4714]: I0129 16:37:57.846224 4714 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1"} pod="openshift-machine-config-operator/machine-config-daemon-ppngk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:37:57 crc kubenswrapper[4714]: I0129 16:37:57.846290 4714 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerName="machine-config-daemon" containerID="cri-o://0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1" gracePeriod=600 Jan 29 16:37:58 crc kubenswrapper[4714]: E0129 16:37:58.756546 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppngk_openshift-machine-config-operator(c8c765f3-89eb-4077-8829-03e86eb0c90c)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" Jan 29 16:37:59 crc kubenswrapper[4714]: I0129 16:37:59.001228 4714 generic.go:334] "Generic (PLEG): container finished" podID="c8c765f3-89eb-4077-8829-03e86eb0c90c" containerID="0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1" exitCode=0 Jan 29 16:37:59 crc kubenswrapper[4714]: I0129 16:37:59.001279 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" event={"ID":"c8c765f3-89eb-4077-8829-03e86eb0c90c","Type":"ContainerDied","Data":"0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1"} Jan 29 16:37:59 crc kubenswrapper[4714]: I0129 16:37:59.001330 4714 scope.go:117] "RemoveContainer" containerID="28ae6797628a288c954e7899195453697f32c3fca947d19910c2ccc63b246a5b" Jan 29 16:37:59 crc kubenswrapper[4714]: I0129 16:37:59.001957 4714 scope.go:117] "RemoveContainer" containerID="0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1" Jan 29 16:37:59 crc kubenswrapper[4714]: E0129 16:37:59.002281 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppngk_openshift-machine-config-operator(c8c765f3-89eb-4077-8829-03e86eb0c90c)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" Jan 29 16:38:01 crc kubenswrapper[4714]: I0129 16:38:01.193668 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-m26zh_78b34628-144f-416a-b493-15ba445caa48/kube-rbac-proxy/0.log" Jan 29 16:38:01 crc kubenswrapper[4714]: I0129 16:38:01.202433 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-m26zh_78b34628-144f-416a-b493-15ba445caa48/controller/0.log" Jan 29 16:38:01 crc kubenswrapper[4714]: I0129 16:38:01.372732 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/cp-frr-files/0.log" Jan 29 16:38:01 crc kubenswrapper[4714]: I0129 16:38:01.548182 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/cp-reloader/0.log" Jan 29 16:38:01 crc kubenswrapper[4714]: I0129 16:38:01.557141 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/cp-frr-files/0.log" Jan 29 16:38:01 crc kubenswrapper[4714]: I0129 16:38:01.624635 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/cp-metrics/0.log" Jan 29 16:38:01 crc kubenswrapper[4714]: I0129 
16:38:01.647768 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/cp-reloader/0.log" Jan 29 16:38:01 crc kubenswrapper[4714]: I0129 16:38:01.738085 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/cp-frr-files/0.log" Jan 29 16:38:01 crc kubenswrapper[4714]: I0129 16:38:01.764995 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/cp-reloader/0.log" Jan 29 16:38:01 crc kubenswrapper[4714]: I0129 16:38:01.777148 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/cp-metrics/0.log" Jan 29 16:38:01 crc kubenswrapper[4714]: I0129 16:38:01.821609 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/cp-metrics/0.log" Jan 29 16:38:01 crc kubenswrapper[4714]: I0129 16:38:01.989901 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/cp-reloader/0.log" Jan 29 16:38:02 crc kubenswrapper[4714]: I0129 16:38:02.000313 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/cp-frr-files/0.log" Jan 29 16:38:02 crc kubenswrapper[4714]: I0129 16:38:02.042994 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/controller/0.log" Jan 29 16:38:02 crc kubenswrapper[4714]: I0129 16:38:02.064895 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/cp-metrics/0.log" Jan 29 16:38:02 crc kubenswrapper[4714]: I0129 16:38:02.183350 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/frr-metrics/0.log" Jan 29 16:38:02 crc kubenswrapper[4714]: I0129 16:38:02.214474 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/kube-rbac-proxy/0.log" Jan 29 16:38:02 crc kubenswrapper[4714]: I0129 16:38:02.300570 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/kube-rbac-proxy-frr/0.log" Jan 29 16:38:02 crc kubenswrapper[4714]: I0129 16:38:02.421580 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/reloader/0.log" Jan 29 16:38:02 crc kubenswrapper[4714]: I0129 16:38:02.508537 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-kk79r_9bbfcf92-8a27-4ba0-9017-7c36906791c8/frr-k8s-webhook-server/0.log" Jan 29 16:38:02 crc kubenswrapper[4714]: I0129 16:38:02.554095 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-59pmz_a97dd473-5873-4aa1-9166-f7a0c6581be1/frr/0.log" Jan 29 16:38:02 crc kubenswrapper[4714]: I0129 16:38:02.603703 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-586b87b897-zpr4q_432a4f98-877c-4f7a-b2b0-ce273a77450a/manager/0.log" Jan 29 16:38:02 crc kubenswrapper[4714]: I0129 16:38:02.714049 4714 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7df7c8d444-xs67n_ffe179b8-a1c8-430b-94f5-920aacf0defe/webhook-server/0.log" Jan 29 16:38:02 crc kubenswrapper[4714]: I0129 16:38:02.832434 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7mmsh_813f735d-8336-49e9-b018-e6dbf74ddc99/kube-rbac-proxy/0.log" Jan 29 16:38:02 crc kubenswrapper[4714]: I0129 16:38:02.966695 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7mmsh_813f735d-8336-49e9-b018-e6dbf74ddc99/speaker/0.log" Jan 29 16:38:10 crc kubenswrapper[4714]: I0129 16:38:10.184312 4714 scope.go:117] "RemoveContainer" containerID="0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1" Jan 29 16:38:10 crc kubenswrapper[4714]: E0129 16:38:10.184999 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppngk_openshift-machine-config-operator(c8c765f3-89eb-4077-8829-03e86eb0c90c)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" Jan 29 16:38:24 crc kubenswrapper[4714]: I0129 16:38:24.187053 4714 scope.go:117] "RemoveContainer" containerID="0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1" Jan 29 16:38:24 crc kubenswrapper[4714]: E0129 16:38:24.187771 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppngk_openshift-machine-config-operator(c8c765f3-89eb-4077-8829-03e86eb0c90c)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" Jan 29 16:38:27 crc kubenswrapper[4714]: I0129 16:38:27.661892 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_c184c6f2-1af5-4f70-9251-6beb2baae06b/util/0.log" Jan 29 16:38:27 crc kubenswrapper[4714]: I0129 16:38:27.872702 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_c184c6f2-1af5-4f70-9251-6beb2baae06b/pull/0.log" Jan 29 16:38:27 crc kubenswrapper[4714]: I0129 16:38:27.878839 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_c184c6f2-1af5-4f70-9251-6beb2baae06b/pull/0.log" Jan 29 16:38:27 crc kubenswrapper[4714]: I0129 16:38:27.922982 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_c184c6f2-1af5-4f70-9251-6beb2baae06b/util/0.log" Jan 29 16:38:28 crc kubenswrapper[4714]: I0129 16:38:28.070011 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_c184c6f2-1af5-4f70-9251-6beb2baae06b/pull/0.log" Jan 29 16:38:28 crc kubenswrapper[4714]: I0129 16:38:28.086140 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_c184c6f2-1af5-4f70-9251-6beb2baae06b/extract/0.log" Jan 29 16:38:28 crc kubenswrapper[4714]: I0129 16:38:28.104407 4714 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dclc9st_c184c6f2-1af5-4f70-9251-6beb2baae06b/util/0.log" Jan 29 16:38:28 crc kubenswrapper[4714]: I0129 16:38:28.251879 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-slsxz_16cb244c-6c63-47e6-a312-ba33ab4d4899/extract-utilities/0.log" Jan 29 16:38:28 crc kubenswrapper[4714]: I0129 16:38:28.430417 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-slsxz_16cb244c-6c63-47e6-a312-ba33ab4d4899/extract-content/0.log" Jan 29 16:38:28 crc kubenswrapper[4714]: I0129 16:38:28.434516 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-slsxz_16cb244c-6c63-47e6-a312-ba33ab4d4899/extract-content/0.log" Jan 29 16:38:28 crc kubenswrapper[4714]: I0129 16:38:28.434827 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-slsxz_16cb244c-6c63-47e6-a312-ba33ab4d4899/extract-utilities/0.log" Jan 29 16:38:28 crc kubenswrapper[4714]: I0129 16:38:28.631992 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-slsxz_16cb244c-6c63-47e6-a312-ba33ab4d4899/extract-utilities/0.log" Jan 29 16:38:28 crc kubenswrapper[4714]: I0129 16:38:28.649009 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-slsxz_16cb244c-6c63-47e6-a312-ba33ab4d4899/extract-content/0.log" Jan 29 16:38:28 crc kubenswrapper[4714]: I0129 16:38:28.817960 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ndx6p_ca655e22-8f97-4e9e-b115-734ae1af7d50/extract-utilities/0.log" Jan 29 16:38:28 crc kubenswrapper[4714]: I0129 16:38:28.977165 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ndx6p_ca655e22-8f97-4e9e-b115-734ae1af7d50/extract-utilities/0.log" Jan 29 16:38:28 crc kubenswrapper[4714]: I0129 16:38:28.985211 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ndx6p_ca655e22-8f97-4e9e-b115-734ae1af7d50/extract-content/0.log" Jan 29 16:38:29 crc kubenswrapper[4714]: I0129 16:38:29.043192 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-slsxz_16cb244c-6c63-47e6-a312-ba33ab4d4899/registry-server/0.log" Jan 29 16:38:29 crc kubenswrapper[4714]: I0129 16:38:29.089312 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ndx6p_ca655e22-8f97-4e9e-b115-734ae1af7d50/extract-content/0.log" Jan 29 16:38:29 crc kubenswrapper[4714]: I0129 16:38:29.182056 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ndx6p_ca655e22-8f97-4e9e-b115-734ae1af7d50/extract-utilities/0.log" Jan 29 16:38:29 crc kubenswrapper[4714]: I0129 16:38:29.182085 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ndx6p_ca655e22-8f97-4e9e-b115-734ae1af7d50/extract-content/0.log" Jan 29 16:38:29 crc kubenswrapper[4714]: I0129 16:38:29.373777 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7rvrl_2696757f-83ca-42df-9855-f76adeee02bb/marketplace-operator/0.log" Jan 29 16:38:29 crc kubenswrapper[4714]: 
I0129 16:38:29.483090 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6gkpz_04dba3a0-a89b-48c5-97ef-e5660d1ae7bb/extract-utilities/0.log" Jan 29 16:38:29 crc kubenswrapper[4714]: I0129 16:38:29.552731 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ndx6p_ca655e22-8f97-4e9e-b115-734ae1af7d50/registry-server/0.log" Jan 29 16:38:29 crc kubenswrapper[4714]: I0129 16:38:29.678451 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6gkpz_04dba3a0-a89b-48c5-97ef-e5660d1ae7bb/extract-utilities/0.log" Jan 29 16:38:29 crc kubenswrapper[4714]: I0129 16:38:29.686570 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6gkpz_04dba3a0-a89b-48c5-97ef-e5660d1ae7bb/extract-content/0.log" Jan 29 16:38:29 crc kubenswrapper[4714]: I0129 16:38:29.729897 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6gkpz_04dba3a0-a89b-48c5-97ef-e5660d1ae7bb/extract-content/0.log" Jan 29 16:38:29 crc kubenswrapper[4714]: I0129 16:38:29.852564 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6gkpz_04dba3a0-a89b-48c5-97ef-e5660d1ae7bb/extract-utilities/0.log" Jan 29 16:38:29 crc kubenswrapper[4714]: I0129 16:38:29.852564 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6gkpz_04dba3a0-a89b-48c5-97ef-e5660d1ae7bb/extract-content/0.log" Jan 29 16:38:29 crc kubenswrapper[4714]: I0129 16:38:29.959647 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6gkpz_04dba3a0-a89b-48c5-97ef-e5660d1ae7bb/registry-server/0.log" Jan 29 16:38:30 crc kubenswrapper[4714]: I0129 16:38:30.015760 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-knxc8_de6c9fbd-8657-4434-bff5-468276791466/extract-utilities/0.log" Jan 29 16:38:30 crc kubenswrapper[4714]: I0129 16:38:30.176002 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-knxc8_de6c9fbd-8657-4434-bff5-468276791466/extract-content/0.log" Jan 29 16:38:30 crc kubenswrapper[4714]: I0129 16:38:30.181216 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-knxc8_de6c9fbd-8657-4434-bff5-468276791466/extract-utilities/0.log" Jan 29 16:38:30 crc kubenswrapper[4714]: I0129 16:38:30.191205 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-knxc8_de6c9fbd-8657-4434-bff5-468276791466/extract-content/0.log" Jan 29 16:38:30 crc kubenswrapper[4714]: I0129 16:38:30.360054 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-knxc8_de6c9fbd-8657-4434-bff5-468276791466/extract-utilities/0.log" Jan 29 16:38:30 crc kubenswrapper[4714]: I0129 16:38:30.360360 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-knxc8_de6c9fbd-8657-4434-bff5-468276791466/extract-content/0.log" Jan 29 16:38:30 crc kubenswrapper[4714]: I0129 16:38:30.558424 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-knxc8_de6c9fbd-8657-4434-bff5-468276791466/registry-server/0.log" Jan 29 16:38:39 crc kubenswrapper[4714]: I0129 16:38:39.184127 4714 scope.go:117] 
"RemoveContainer" containerID="0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1" Jan 29 16:38:39 crc kubenswrapper[4714]: E0129 16:38:39.184817 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppngk_openshift-machine-config-operator(c8c765f3-89eb-4077-8829-03e86eb0c90c)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" Jan 29 16:38:50 crc kubenswrapper[4714]: I0129 16:38:50.184239 4714 scope.go:117] "RemoveContainer" containerID="0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1" Jan 29 16:38:50 crc kubenswrapper[4714]: E0129 16:38:50.185169 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppngk_openshift-machine-config-operator(c8c765f3-89eb-4077-8829-03e86eb0c90c)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" Jan 29 16:39:03 crc kubenswrapper[4714]: I0129 16:39:03.184868 4714 scope.go:117] "RemoveContainer" containerID="0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1" Jan 29 16:39:03 crc kubenswrapper[4714]: E0129 16:39:03.185913 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppngk_openshift-machine-config-operator(c8c765f3-89eb-4077-8829-03e86eb0c90c)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" Jan 29 16:39:15 crc kubenswrapper[4714]: I0129 16:39:15.184754 4714 scope.go:117] "RemoveContainer" containerID="0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1" Jan 29 16:39:15 crc kubenswrapper[4714]: E0129 16:39:15.185314 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppngk_openshift-machine-config-operator(c8c765f3-89eb-4077-8829-03e86eb0c90c)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" Jan 29 16:39:27 crc kubenswrapper[4714]: I0129 16:39:27.184239 4714 scope.go:117] "RemoveContainer" containerID="0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1" Jan 29 16:39:27 crc kubenswrapper[4714]: E0129 16:39:27.185426 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppngk_openshift-machine-config-operator(c8c765f3-89eb-4077-8829-03e86eb0c90c)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" Jan 29 16:39:38 crc kubenswrapper[4714]: I0129 16:39:38.184144 4714 scope.go:117] "RemoveContainer" containerID="0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1" Jan 29 16:39:38 crc kubenswrapper[4714]: E0129 16:39:38.184769 4714 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppngk_openshift-machine-config-operator(c8c765f3-89eb-4077-8829-03e86eb0c90c)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" Jan 29 16:39:41 crc kubenswrapper[4714]: I0129 16:39:41.657243 4714 generic.go:334] "Generic (PLEG): container finished" podID="ff3dd7de-0ec3-4550-ba38-0d67405a2671" containerID="9fd93823c7fddefb604b23f3bb0a407aa212a848b3dc9ccb4d9664567595015f" exitCode=0 Jan 29 16:39:41 crc kubenswrapper[4714]: I0129 16:39:41.657328 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hqc4j/must-gather-684p8" event={"ID":"ff3dd7de-0ec3-4550-ba38-0d67405a2671","Type":"ContainerDied","Data":"9fd93823c7fddefb604b23f3bb0a407aa212a848b3dc9ccb4d9664567595015f"} Jan 29 16:39:41 crc kubenswrapper[4714]: I0129 16:39:41.658128 4714 scope.go:117] "RemoveContainer" containerID="9fd93823c7fddefb604b23f3bb0a407aa212a848b3dc9ccb4d9664567595015f" Jan 29 16:39:42 crc kubenswrapper[4714]: I0129 16:39:42.162224 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hqc4j_must-gather-684p8_ff3dd7de-0ec3-4550-ba38-0d67405a2671/gather/0.log" Jan 29 16:39:51 crc kubenswrapper[4714]: I0129 16:39:51.184467 4714 scope.go:117] "RemoveContainer" containerID="0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1" Jan 29 16:39:51 crc kubenswrapper[4714]: E0129 16:39:51.186695 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppngk_openshift-machine-config-operator(c8c765f3-89eb-4077-8829-03e86eb0c90c)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" Jan 29 16:39:51 crc kubenswrapper[4714]: I0129 16:39:51.557998 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hqc4j/must-gather-684p8"] Jan 29 16:39:51 crc kubenswrapper[4714]: I0129 16:39:51.558514 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-hqc4j/must-gather-684p8" podUID="ff3dd7de-0ec3-4550-ba38-0d67405a2671" containerName="copy" containerID="cri-o://3660194f8ee015e6e72c5a402b3814eb8d75a532bbd31644dbe43ccd17971ac5" gracePeriod=2 Jan 29 16:39:51 crc kubenswrapper[4714]: I0129 16:39:51.563746 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hqc4j/must-gather-684p8"] Jan 29 16:39:52 crc kubenswrapper[4714]: I0129 16:39:52.459589 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hqc4j_must-gather-684p8_ff3dd7de-0ec3-4550-ba38-0d67405a2671/copy/0.log" Jan 29 16:39:52 crc kubenswrapper[4714]: I0129 16:39:52.460499 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hqc4j/must-gather-684p8" Jan 29 16:39:52 crc kubenswrapper[4714]: I0129 16:39:52.490359 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ff3dd7de-0ec3-4550-ba38-0d67405a2671-must-gather-output\") pod \"ff3dd7de-0ec3-4550-ba38-0d67405a2671\" (UID: \"ff3dd7de-0ec3-4550-ba38-0d67405a2671\") " Jan 29 16:39:52 crc kubenswrapper[4714]: I0129 16:39:52.490564 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jf26\" (UniqueName: \"kubernetes.io/projected/ff3dd7de-0ec3-4550-ba38-0d67405a2671-kube-api-access-8jf26\") pod \"ff3dd7de-0ec3-4550-ba38-0d67405a2671\" (UID: \"ff3dd7de-0ec3-4550-ba38-0d67405a2671\") " Jan 29 16:39:52 crc kubenswrapper[4714]: I0129 16:39:52.495348 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff3dd7de-0ec3-4550-ba38-0d67405a2671-kube-api-access-8jf26" (OuterVolumeSpecName: "kube-api-access-8jf26") pod "ff3dd7de-0ec3-4550-ba38-0d67405a2671" (UID: "ff3dd7de-0ec3-4550-ba38-0d67405a2671"). InnerVolumeSpecName "kube-api-access-8jf26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:39:52 crc kubenswrapper[4714]: I0129 16:39:52.555027 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff3dd7de-0ec3-4550-ba38-0d67405a2671-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ff3dd7de-0ec3-4550-ba38-0d67405a2671" (UID: "ff3dd7de-0ec3-4550-ba38-0d67405a2671"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:39:52 crc kubenswrapper[4714]: I0129 16:39:52.592349 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jf26\" (UniqueName: \"kubernetes.io/projected/ff3dd7de-0ec3-4550-ba38-0d67405a2671-kube-api-access-8jf26\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:52 crc kubenswrapper[4714]: I0129 16:39:52.592402 4714 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ff3dd7de-0ec3-4550-ba38-0d67405a2671-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:52 crc kubenswrapper[4714]: I0129 16:39:52.733091 4714 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hqc4j_must-gather-684p8_ff3dd7de-0ec3-4550-ba38-0d67405a2671/copy/0.log" Jan 29 16:39:52 crc kubenswrapper[4714]: I0129 16:39:52.733796 4714 generic.go:334] "Generic (PLEG): container finished" podID="ff3dd7de-0ec3-4550-ba38-0d67405a2671" containerID="3660194f8ee015e6e72c5a402b3814eb8d75a532bbd31644dbe43ccd17971ac5" exitCode=143 Jan 29 16:39:52 crc kubenswrapper[4714]: I0129 16:39:52.733842 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hqc4j/must-gather-684p8" Jan 29 16:39:52 crc kubenswrapper[4714]: I0129 16:39:52.733857 4714 scope.go:117] "RemoveContainer" containerID="3660194f8ee015e6e72c5a402b3814eb8d75a532bbd31644dbe43ccd17971ac5" Jan 29 16:39:52 crc kubenswrapper[4714]: I0129 16:39:52.749527 4714 scope.go:117] "RemoveContainer" containerID="9fd93823c7fddefb604b23f3bb0a407aa212a848b3dc9ccb4d9664567595015f" Jan 29 16:39:52 crc kubenswrapper[4714]: I0129 16:39:52.804095 4714 scope.go:117] "RemoveContainer" containerID="3660194f8ee015e6e72c5a402b3814eb8d75a532bbd31644dbe43ccd17971ac5" Jan 29 16:39:52 crc kubenswrapper[4714]: E0129 16:39:52.804490 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3660194f8ee015e6e72c5a402b3814eb8d75a532bbd31644dbe43ccd17971ac5\": container with ID starting with 3660194f8ee015e6e72c5a402b3814eb8d75a532bbd31644dbe43ccd17971ac5 not found: ID does not exist" containerID="3660194f8ee015e6e72c5a402b3814eb8d75a532bbd31644dbe43ccd17971ac5" Jan 29 16:39:52 crc kubenswrapper[4714]: I0129 16:39:52.804531 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3660194f8ee015e6e72c5a402b3814eb8d75a532bbd31644dbe43ccd17971ac5"} err="failed to get container status \"3660194f8ee015e6e72c5a402b3814eb8d75a532bbd31644dbe43ccd17971ac5\": rpc error: code = NotFound desc = could not find container \"3660194f8ee015e6e72c5a402b3814eb8d75a532bbd31644dbe43ccd17971ac5\": container with ID starting with 3660194f8ee015e6e72c5a402b3814eb8d75a532bbd31644dbe43ccd17971ac5 not found: ID does not exist" Jan 29 16:39:52 crc kubenswrapper[4714]: I0129 16:39:52.804557 4714 scope.go:117] "RemoveContainer" containerID="9fd93823c7fddefb604b23f3bb0a407aa212a848b3dc9ccb4d9664567595015f" Jan 29 16:39:52 crc kubenswrapper[4714]: E0129 16:39:52.805006 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fd93823c7fddefb604b23f3bb0a407aa212a848b3dc9ccb4d9664567595015f\": container with ID starting with 9fd93823c7fddefb604b23f3bb0a407aa212a848b3dc9ccb4d9664567595015f not found: ID does not exist" containerID="9fd93823c7fddefb604b23f3bb0a407aa212a848b3dc9ccb4d9664567595015f" Jan 29 16:39:52 crc kubenswrapper[4714]: I0129 16:39:52.805041 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fd93823c7fddefb604b23f3bb0a407aa212a848b3dc9ccb4d9664567595015f"} err="failed to get container status \"9fd93823c7fddefb604b23f3bb0a407aa212a848b3dc9ccb4d9664567595015f\": rpc error: code = NotFound desc = could not find container \"9fd93823c7fddefb604b23f3bb0a407aa212a848b3dc9ccb4d9664567595015f\": container with ID starting with 9fd93823c7fddefb604b23f3bb0a407aa212a848b3dc9ccb4d9664567595015f not found: ID does not exist" Jan 29 16:39:54 crc kubenswrapper[4714]: I0129 16:39:54.201342 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff3dd7de-0ec3-4550-ba38-0d67405a2671" path="/var/lib/kubelet/pods/ff3dd7de-0ec3-4550-ba38-0d67405a2671/volumes" Jan 29 16:40:06 crc kubenswrapper[4714]: I0129 16:40:06.184714 4714 scope.go:117] "RemoveContainer" containerID="0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1" Jan 29 16:40:06 crc kubenswrapper[4714]: E0129 16:40:06.185900 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppngk_openshift-machine-config-operator(c8c765f3-89eb-4077-8829-03e86eb0c90c)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" Jan 29 16:40:18 crc kubenswrapper[4714]: I0129 16:40:18.184499 4714 scope.go:117] "RemoveContainer" containerID="0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1" Jan 29 16:40:18 crc kubenswrapper[4714]: E0129 16:40:18.185391 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppngk_openshift-machine-config-operator(c8c765f3-89eb-4077-8829-03e86eb0c90c)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" Jan 29 16:40:31 crc kubenswrapper[4714]: I0129 16:40:31.185129 4714 scope.go:117] "RemoveContainer" containerID="0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1" Jan 29 16:40:31 crc kubenswrapper[4714]: E0129 16:40:31.186449 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppngk_openshift-machine-config-operator(c8c765f3-89eb-4077-8829-03e86eb0c90c)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" Jan 29 16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.625224 4714 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x6vt6"] Jan 29 16:40:41 crc kubenswrapper[4714]: E0129 16:40:41.626496 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff3dd7de-0ec3-4550-ba38-0d67405a2671" containerName="gather" Jan 29 16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.626520 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff3dd7de-0ec3-4550-ba38-0d67405a2671" containerName="gather" Jan 29 16:40:41 crc kubenswrapper[4714]: E0129 16:40:41.626540 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f29ce50e-bbd2-4fe5-a032-5163f3f80ca0" containerName="extract-utilities" Jan 29 16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.626553 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="f29ce50e-bbd2-4fe5-a032-5163f3f80ca0" containerName="extract-utilities" Jan 29 16:40:41 crc kubenswrapper[4714]: E0129 16:40:41.626577 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6b3552-9d7a-459f-a4c4-67da40c67c15" containerName="registry-server" Jan 29 16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.626592 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6b3552-9d7a-459f-a4c4-67da40c67c15" containerName="registry-server" Jan 29 16:40:41 crc kubenswrapper[4714]: E0129 16:40:41.626605 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff3dd7de-0ec3-4550-ba38-0d67405a2671" containerName="copy" Jan 29 16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.626618 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff3dd7de-0ec3-4550-ba38-0d67405a2671" containerName="copy" Jan 29 16:40:41 crc kubenswrapper[4714]: E0129 16:40:41.626642 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f29ce50e-bbd2-4fe5-a032-5163f3f80ca0" containerName="extract-content" Jan 29 
16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.626655 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="f29ce50e-bbd2-4fe5-a032-5163f3f80ca0" containerName="extract-content" Jan 29 16:40:41 crc kubenswrapper[4714]: E0129 16:40:41.626682 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6b3552-9d7a-459f-a4c4-67da40c67c15" containerName="extract-utilities" Jan 29 16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.626695 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6b3552-9d7a-459f-a4c4-67da40c67c15" containerName="extract-utilities" Jan 29 16:40:41 crc kubenswrapper[4714]: E0129 16:40:41.626724 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6b3552-9d7a-459f-a4c4-67da40c67c15" containerName="extract-content" Jan 29 16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.626736 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6b3552-9d7a-459f-a4c4-67da40c67c15" containerName="extract-content" Jan 29 16:40:41 crc kubenswrapper[4714]: E0129 16:40:41.626755 4714 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f29ce50e-bbd2-4fe5-a032-5163f3f80ca0" containerName="registry-server" Jan 29 16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.626769 4714 state_mem.go:107] "Deleted CPUSet assignment" podUID="f29ce50e-bbd2-4fe5-a032-5163f3f80ca0" containerName="registry-server" Jan 29 16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.627012 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff3dd7de-0ec3-4550-ba38-0d67405a2671" containerName="gather" Jan 29 16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.627040 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="f29ce50e-bbd2-4fe5-a032-5163f3f80ca0" containerName="registry-server" Jan 29 16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.627058 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e6b3552-9d7a-459f-a4c4-67da40c67c15" containerName="registry-server" Jan 29 16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.627071 4714 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff3dd7de-0ec3-4550-ba38-0d67405a2671" containerName="copy" Jan 29 16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.628527 4714 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x6vt6" Jan 29 16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.643169 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x6vt6"] Jan 29 16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.702213 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb90f87-76b4-4b43-9242-5640128dbed9-utilities\") pod \"redhat-operators-x6vt6\" (UID: \"4bb90f87-76b4-4b43-9242-5640128dbed9\") " pod="openshift-marketplace/redhat-operators-x6vt6" Jan 29 16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.702291 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb90f87-76b4-4b43-9242-5640128dbed9-catalog-content\") pod \"redhat-operators-x6vt6\" (UID: \"4bb90f87-76b4-4b43-9242-5640128dbed9\") " pod="openshift-marketplace/redhat-operators-x6vt6" Jan 29 16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.702338 4714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f5nw\" (UniqueName: \"kubernetes.io/projected/4bb90f87-76b4-4b43-9242-5640128dbed9-kube-api-access-9f5nw\") pod \"redhat-operators-x6vt6\" (UID: \"4bb90f87-76b4-4b43-9242-5640128dbed9\") " pod="openshift-marketplace/redhat-operators-x6vt6" Jan 29 16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.803247 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb90f87-76b4-4b43-9242-5640128dbed9-utilities\") pod \"redhat-operators-x6vt6\" (UID: \"4bb90f87-76b4-4b43-9242-5640128dbed9\") " pod="openshift-marketplace/redhat-operators-x6vt6" Jan 29 16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.803318 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb90f87-76b4-4b43-9242-5640128dbed9-catalog-content\") pod \"redhat-operators-x6vt6\" (UID: \"4bb90f87-76b4-4b43-9242-5640128dbed9\") " pod="openshift-marketplace/redhat-operators-x6vt6" Jan 29 16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.803366 4714 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f5nw\" (UniqueName: \"kubernetes.io/projected/4bb90f87-76b4-4b43-9242-5640128dbed9-kube-api-access-9f5nw\") pod \"redhat-operators-x6vt6\" (UID: \"4bb90f87-76b4-4b43-9242-5640128dbed9\") " pod="openshift-marketplace/redhat-operators-x6vt6" Jan 29 16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.804126 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb90f87-76b4-4b43-9242-5640128dbed9-utilities\") pod \"redhat-operators-x6vt6\" (UID: \"4bb90f87-76b4-4b43-9242-5640128dbed9\") " pod="openshift-marketplace/redhat-operators-x6vt6" Jan 29 16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.804433 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb90f87-76b4-4b43-9242-5640128dbed9-catalog-content\") pod \"redhat-operators-x6vt6\" (UID: \"4bb90f87-76b4-4b43-9242-5640128dbed9\") " pod="openshift-marketplace/redhat-operators-x6vt6" Jan 29 16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.826883 4714 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9f5nw\" (UniqueName: \"kubernetes.io/projected/4bb90f87-76b4-4b43-9242-5640128dbed9-kube-api-access-9f5nw\") pod \"redhat-operators-x6vt6\" (UID: \"4bb90f87-76b4-4b43-9242-5640128dbed9\") " pod="openshift-marketplace/redhat-operators-x6vt6" Jan 29 16:40:41 crc kubenswrapper[4714]: I0129 16:40:41.966846 4714 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x6vt6" Jan 29 16:40:42 crc kubenswrapper[4714]: I0129 16:40:42.173744 4714 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x6vt6"] Jan 29 16:40:43 crc kubenswrapper[4714]: I0129 16:40:43.068321 4714 generic.go:334] "Generic (PLEG): container finished" podID="4bb90f87-76b4-4b43-9242-5640128dbed9" containerID="e74a0b3f97545922aa02f04f1f78c55567e52209ac7863ae6e2cc02588e55ab7" exitCode=0 Jan 29 16:40:43 crc kubenswrapper[4714]: I0129 16:40:43.068366 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6vt6" event={"ID":"4bb90f87-76b4-4b43-9242-5640128dbed9","Type":"ContainerDied","Data":"e74a0b3f97545922aa02f04f1f78c55567e52209ac7863ae6e2cc02588e55ab7"} Jan 29 16:40:43 crc kubenswrapper[4714]: I0129 16:40:43.068394 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6vt6" event={"ID":"4bb90f87-76b4-4b43-9242-5640128dbed9","Type":"ContainerStarted","Data":"491231bcd133cd91582b60d1cec8a74ce694970727fe9817d84874a1355e3195"} Jan 29 16:40:45 crc kubenswrapper[4714]: I0129 16:40:45.083228 4714 generic.go:334] "Generic (PLEG): container finished" podID="4bb90f87-76b4-4b43-9242-5640128dbed9" containerID="e0be61f9d330e704f6ad4da95588bd54481951c4968e8ce1ae9312e631fe96f2" exitCode=0 Jan 29 16:40:45 crc kubenswrapper[4714]: I0129 16:40:45.083331 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6vt6" event={"ID":"4bb90f87-76b4-4b43-9242-5640128dbed9","Type":"ContainerDied","Data":"e0be61f9d330e704f6ad4da95588bd54481951c4968e8ce1ae9312e631fe96f2"} Jan 29 16:40:46 crc kubenswrapper[4714]: I0129 16:40:46.092334 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6vt6" event={"ID":"4bb90f87-76b4-4b43-9242-5640128dbed9","Type":"ContainerStarted","Data":"2cd4a929c1ff0695bb4467a8ea29639e901901262b6e43af866569c0d36a51c6"} Jan 29 16:40:46 crc kubenswrapper[4714]: I0129 16:40:46.117140 4714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x6vt6" podStartSLOduration=2.447135732 podStartE2EDuration="5.117120243s" podCreationTimestamp="2026-01-29 16:40:41 +0000 UTC" firstStartedPulling="2026-01-29 16:40:43.070835758 +0000 UTC m=+1849.591336888" lastFinishedPulling="2026-01-29 16:40:45.740820249 +0000 UTC m=+1852.261321399" observedRunningTime="2026-01-29 16:40:46.1138142 +0000 UTC m=+1852.634315330" watchObservedRunningTime="2026-01-29 16:40:46.117120243 +0000 UTC m=+1852.637621363" Jan 29 16:40:46 crc kubenswrapper[4714]: I0129 16:40:46.185270 4714 scope.go:117] "RemoveContainer" containerID="0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1" Jan 29 16:40:46 crc kubenswrapper[4714]: E0129 16:40:46.185523 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ppngk_openshift-machine-config-operator(c8c765f3-89eb-4077-8829-03e86eb0c90c)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" Jan 29 16:40:51 crc kubenswrapper[4714]: I0129 16:40:51.967562 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x6vt6" Jan 29 16:40:51 crc kubenswrapper[4714]: I0129 16:40:51.968847 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x6vt6" Jan 29 16:40:53 crc kubenswrapper[4714]: I0129 16:40:53.046617 4714 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x6vt6" podUID="4bb90f87-76b4-4b43-9242-5640128dbed9" containerName="registry-server" probeResult="failure" output=< Jan 29 16:40:53 crc kubenswrapper[4714]: timeout: failed to connect service ":50051" within 1s Jan 29 16:40:53 crc kubenswrapper[4714]: > Jan 29 16:40:58 crc kubenswrapper[4714]: I0129 16:40:58.184029 4714 scope.go:117] "RemoveContainer" containerID="0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1" Jan 29 16:40:58 crc kubenswrapper[4714]: E0129 16:40:58.184509 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppngk_openshift-machine-config-operator(c8c765f3-89eb-4077-8829-03e86eb0c90c)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" Jan 29 16:41:02 crc kubenswrapper[4714]: I0129 16:41:02.001778 4714 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x6vt6" Jan 29 16:41:02 crc kubenswrapper[4714]: I0129 16:41:02.039966 4714 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x6vt6" Jan 29 16:41:02 crc kubenswrapper[4714]: I0129 16:41:02.530543 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x6vt6"] Jan 29 16:41:03 crc kubenswrapper[4714]: I0129 16:41:03.218983 4714 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x6vt6" podUID="4bb90f87-76b4-4b43-9242-5640128dbed9" containerName="registry-server" containerID="cri-o://2cd4a929c1ff0695bb4467a8ea29639e901901262b6e43af866569c0d36a51c6" gracePeriod=2 Jan 29 16:41:03 crc kubenswrapper[4714]: I0129 16:41:03.652042 4714 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x6vt6" Jan 29 16:41:03 crc kubenswrapper[4714]: I0129 16:41:03.821828 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb90f87-76b4-4b43-9242-5640128dbed9-utilities\") pod \"4bb90f87-76b4-4b43-9242-5640128dbed9\" (UID: \"4bb90f87-76b4-4b43-9242-5640128dbed9\") " Jan 29 16:41:03 crc kubenswrapper[4714]: I0129 16:41:03.821872 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f5nw\" (UniqueName: \"kubernetes.io/projected/4bb90f87-76b4-4b43-9242-5640128dbed9-kube-api-access-9f5nw\") pod \"4bb90f87-76b4-4b43-9242-5640128dbed9\" (UID: \"4bb90f87-76b4-4b43-9242-5640128dbed9\") " Jan 29 16:41:03 crc kubenswrapper[4714]: I0129 16:41:03.821893 4714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb90f87-76b4-4b43-9242-5640128dbed9-catalog-content\") pod \"4bb90f87-76b4-4b43-9242-5640128dbed9\" (UID: \"4bb90f87-76b4-4b43-9242-5640128dbed9\") " Jan 29 16:41:03 crc kubenswrapper[4714]: I0129 16:41:03.823125 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bb90f87-76b4-4b43-9242-5640128dbed9-utilities" (OuterVolumeSpecName: "utilities") pod "4bb90f87-76b4-4b43-9242-5640128dbed9" (UID: "4bb90f87-76b4-4b43-9242-5640128dbed9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:41:03 crc kubenswrapper[4714]: I0129 16:41:03.830986 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb90f87-76b4-4b43-9242-5640128dbed9-kube-api-access-9f5nw" (OuterVolumeSpecName: "kube-api-access-9f5nw") pod "4bb90f87-76b4-4b43-9242-5640128dbed9" (UID: "4bb90f87-76b4-4b43-9242-5640128dbed9"). InnerVolumeSpecName "kube-api-access-9f5nw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:41:03 crc kubenswrapper[4714]: I0129 16:41:03.924509 4714 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb90f87-76b4-4b43-9242-5640128dbed9-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:41:03 crc kubenswrapper[4714]: I0129 16:41:03.924573 4714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f5nw\" (UniqueName: \"kubernetes.io/projected/4bb90f87-76b4-4b43-9242-5640128dbed9-kube-api-access-9f5nw\") on node \"crc\" DevicePath \"\"" Jan 29 16:41:03 crc kubenswrapper[4714]: I0129 16:41:03.998194 4714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bb90f87-76b4-4b43-9242-5640128dbed9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4bb90f87-76b4-4b43-9242-5640128dbed9" (UID: "4bb90f87-76b4-4b43-9242-5640128dbed9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:41:04 crc kubenswrapper[4714]: I0129 16:41:04.025241 4714 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb90f87-76b4-4b43-9242-5640128dbed9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:41:04 crc kubenswrapper[4714]: I0129 16:41:04.231648 4714 generic.go:334] "Generic (PLEG): container finished" podID="4bb90f87-76b4-4b43-9242-5640128dbed9" containerID="2cd4a929c1ff0695bb4467a8ea29639e901901262b6e43af866569c0d36a51c6" exitCode=0 Jan 29 16:41:04 crc kubenswrapper[4714]: I0129 16:41:04.231699 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6vt6" event={"ID":"4bb90f87-76b4-4b43-9242-5640128dbed9","Type":"ContainerDied","Data":"2cd4a929c1ff0695bb4467a8ea29639e901901262b6e43af866569c0d36a51c6"} Jan 29 16:41:04 crc kubenswrapper[4714]: I0129 16:41:04.231732 4714 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6vt6" event={"ID":"4bb90f87-76b4-4b43-9242-5640128dbed9","Type":"ContainerDied","Data":"491231bcd133cd91582b60d1cec8a74ce694970727fe9817d84874a1355e3195"} Jan 29 16:41:04 crc kubenswrapper[4714]: I0129 16:41:04.231756 4714 scope.go:117] "RemoveContainer" containerID="2cd4a929c1ff0695bb4467a8ea29639e901901262b6e43af866569c0d36a51c6" Jan 29 16:41:04 crc kubenswrapper[4714]: I0129 16:41:04.232128 4714 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x6vt6" Jan 29 16:41:04 crc kubenswrapper[4714]: I0129 16:41:04.255539 4714 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x6vt6"] Jan 29 16:41:04 crc kubenswrapper[4714]: I0129 16:41:04.259555 4714 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x6vt6"] Jan 29 16:41:04 crc kubenswrapper[4714]: I0129 16:41:04.272055 4714 scope.go:117] "RemoveContainer" containerID="e0be61f9d330e704f6ad4da95588bd54481951c4968e8ce1ae9312e631fe96f2" Jan 29 16:41:04 crc kubenswrapper[4714]: I0129 16:41:04.292478 4714 scope.go:117] "RemoveContainer" containerID="e74a0b3f97545922aa02f04f1f78c55567e52209ac7863ae6e2cc02588e55ab7" Jan 29 16:41:04 crc kubenswrapper[4714]: I0129 16:41:04.315267 4714 scope.go:117] "RemoveContainer" containerID="2cd4a929c1ff0695bb4467a8ea29639e901901262b6e43af866569c0d36a51c6" Jan 29 16:41:04 crc kubenswrapper[4714]: E0129 16:41:04.315826 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cd4a929c1ff0695bb4467a8ea29639e901901262b6e43af866569c0d36a51c6\": container with ID starting with 2cd4a929c1ff0695bb4467a8ea29639e901901262b6e43af866569c0d36a51c6 not found: ID does not exist" containerID="2cd4a929c1ff0695bb4467a8ea29639e901901262b6e43af866569c0d36a51c6" Jan 29 16:41:04 crc kubenswrapper[4714]: I0129 16:41:04.315876 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cd4a929c1ff0695bb4467a8ea29639e901901262b6e43af866569c0d36a51c6"} err="failed to get container status \"2cd4a929c1ff0695bb4467a8ea29639e901901262b6e43af866569c0d36a51c6\": rpc error: code = NotFound desc = could not find container \"2cd4a929c1ff0695bb4467a8ea29639e901901262b6e43af866569c0d36a51c6\": container with ID starting with 2cd4a929c1ff0695bb4467a8ea29639e901901262b6e43af866569c0d36a51c6 not found: ID does not exist" Jan 29 16:41:04 crc 
kubenswrapper[4714]: I0129 16:41:04.315919 4714 scope.go:117] "RemoveContainer" containerID="e0be61f9d330e704f6ad4da95588bd54481951c4968e8ce1ae9312e631fe96f2" Jan 29 16:41:04 crc kubenswrapper[4714]: E0129 16:41:04.316257 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0be61f9d330e704f6ad4da95588bd54481951c4968e8ce1ae9312e631fe96f2\": container with ID starting with e0be61f9d330e704f6ad4da95588bd54481951c4968e8ce1ae9312e631fe96f2 not found: ID does not exist" containerID="e0be61f9d330e704f6ad4da95588bd54481951c4968e8ce1ae9312e631fe96f2" Jan 29 16:41:04 crc kubenswrapper[4714]: I0129 16:41:04.316281 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0be61f9d330e704f6ad4da95588bd54481951c4968e8ce1ae9312e631fe96f2"} err="failed to get container status \"e0be61f9d330e704f6ad4da95588bd54481951c4968e8ce1ae9312e631fe96f2\": rpc error: code = NotFound desc = could not find container \"e0be61f9d330e704f6ad4da95588bd54481951c4968e8ce1ae9312e631fe96f2\": container with ID starting with e0be61f9d330e704f6ad4da95588bd54481951c4968e8ce1ae9312e631fe96f2 not found: ID does not exist" Jan 29 16:41:04 crc kubenswrapper[4714]: I0129 16:41:04.316294 4714 scope.go:117] "RemoveContainer" containerID="e74a0b3f97545922aa02f04f1f78c55567e52209ac7863ae6e2cc02588e55ab7" Jan 29 16:41:04 crc kubenswrapper[4714]: E0129 16:41:04.316521 4714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e74a0b3f97545922aa02f04f1f78c55567e52209ac7863ae6e2cc02588e55ab7\": container with ID starting with e74a0b3f97545922aa02f04f1f78c55567e52209ac7863ae6e2cc02588e55ab7 not found: ID does not exist" containerID="e74a0b3f97545922aa02f04f1f78c55567e52209ac7863ae6e2cc02588e55ab7" Jan 29 16:41:04 crc kubenswrapper[4714]: I0129 16:41:04.316554 4714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e74a0b3f97545922aa02f04f1f78c55567e52209ac7863ae6e2cc02588e55ab7"} err="failed to get container status \"e74a0b3f97545922aa02f04f1f78c55567e52209ac7863ae6e2cc02588e55ab7\": rpc error: code = NotFound desc = could not find container \"e74a0b3f97545922aa02f04f1f78c55567e52209ac7863ae6e2cc02588e55ab7\": container with ID starting with e74a0b3f97545922aa02f04f1f78c55567e52209ac7863ae6e2cc02588e55ab7 not found: ID does not exist" Jan 29 16:41:06 crc kubenswrapper[4714]: I0129 16:41:06.198474 4714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb90f87-76b4-4b43-9242-5640128dbed9" path="/var/lib/kubelet/pods/4bb90f87-76b4-4b43-9242-5640128dbed9/volumes" Jan 29 16:41:10 crc kubenswrapper[4714]: I0129 16:41:10.184122 4714 scope.go:117] "RemoveContainer" containerID="0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1" Jan 29 16:41:10 crc kubenswrapper[4714]: E0129 16:41:10.184953 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppngk_openshift-machine-config-operator(c8c765f3-89eb-4077-8829-03e86eb0c90c)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" Jan 29 16:41:24 crc kubenswrapper[4714]: I0129 16:41:24.190685 4714 scope.go:117] "RemoveContainer" containerID="0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1" 
Jan 29 16:41:24 crc kubenswrapper[4714]: E0129 16:41:24.191829 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppngk_openshift-machine-config-operator(c8c765f3-89eb-4077-8829-03e86eb0c90c)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" Jan 29 16:41:39 crc kubenswrapper[4714]: I0129 16:41:39.183779 4714 scope.go:117] "RemoveContainer" containerID="0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1" Jan 29 16:41:39 crc kubenswrapper[4714]: E0129 16:41:39.185021 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppngk_openshift-machine-config-operator(c8c765f3-89eb-4077-8829-03e86eb0c90c)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" Jan 29 16:41:51 crc kubenswrapper[4714]: I0129 16:41:51.184263 4714 scope.go:117] "RemoveContainer" containerID="0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1" Jan 29 16:41:51 crc kubenswrapper[4714]: E0129 16:41:51.185542 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppngk_openshift-machine-config-operator(c8c765f3-89eb-4077-8829-03e86eb0c90c)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c" Jan 29 16:42:02 crc kubenswrapper[4714]: I0129 16:42:02.184638 4714 scope.go:117] "RemoveContainer" containerID="0f064189d4746edc06d2b74e60a2cbff7511efd665171516db63a8272ebb29e1" Jan 29 16:42:02 crc kubenswrapper[4714]: E0129 16:42:02.185641 4714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ppngk_openshift-machine-config-operator(c8c765f3-89eb-4077-8829-03e86eb0c90c)\"" pod="openshift-machine-config-operator/machine-config-daemon-ppngk" podUID="c8c765f3-89eb-4077-8829-03e86eb0c90c"